Test Report: KVM_Linux_crio 18007

                    
fc27285b44a3684906f383c28cb886ae15cd7524:2024-01-30:32829

Failed tests (29/310)

| Order | Failed Test | Duration (s) |
|-------|-------------|--------------|
| 39 | TestAddons/parallel/Ingress | 158.06 |
| 53 | TestAddons/StoppedEnableDisable | 154.13 |
| 81 | TestFunctional/serial/CacheCmd/cache/add_local | 1.28 |
| 152 | TestFunctional/parallel/ImageCommands/ImageLoadDaemon | 1.13 |
| 153 | TestFunctional/parallel/ImageCommands/ImageReloadDaemon | 0.7 |
| 154 | TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon | 2.76 |
| 155 | TestFunctional/parallel/ImageCommands/ImageSaveToFile | 0.32 |
| 157 | TestFunctional/parallel/ImageCommands/ImageLoadFromFile | 0.18 |
| 158 | TestFunctional/parallel/ImageCommands/ImageSaveDaemon | 0.26 |
| 169 | TestIngressAddonLegacy/serial/ValidateIngressAddons | 174.97 |
| 224 | TestMultiNode/serial/RestartKeepsNodes | 694.67 |
| 226 | TestMultiNode/serial/StopMultiNode | 142.24 |
| 233 | TestPreload | 276.54 |
| 294 | TestStartStop/group/no-preload/serial/Stop | 138.79 |
| 296 | TestStartStop/group/embed-certs/serial/Stop | 138.78 |
| 299 | TestStartStop/group/default-k8s-diff-port/serial/Stop | 138.79 |
| 300 | TestStartStop/group/no-preload/serial/EnableAddonAfterStop | 12.38 |
| 301 | TestStartStop/group/embed-certs/serial/EnableAddonAfterStop | 12.39 |
| 306 | TestStartStop/group/old-k8s-version/serial/Stop | 138.92 |
| 307 | TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop | 12.38 |
| 309 | TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop | 12.38 |
| 311 | TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop | 543.31 |
| 312 | TestStartStop/group/no-preload/serial/UserAppExistsAfterStop | 543.33 |
| 313 | TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop | 543.25 |
| 314 | TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop | 543.27 |
| 315 | TestStartStop/group/embed-certs/serial/AddonExistsAfterStop | 352.04 |
| 316 | TestStartStop/group/no-preload/serial/AddonExistsAfterStop | 259.15 |
| 317 | TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop | 138.21 |
| 318 | TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop | 168.46 |
TestAddons/parallel/Ingress (158.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-663262 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-663262 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-663262 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [18353055-d3bc-4d56-9040-5d238a7d772c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [18353055-d3bc-4d56-9040-5d238a7d772c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.004634039s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-663262 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.745464927s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
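Note: curl exits with status 28 when a request times out, so the `ssh: Process exited with status 28` above means the in-VM curl never received a response from the ingress on 127.0.0.1. A minimal sketch for reproducing the probe by hand against the same profile (profile name, label selector, testdata file, and Host header are taken from the log above; everything else is an assumption about the local environment):

	# Check the ingress-nginx controller and the Ingress object created from
	# testdata/nginx-ingress-v1.yaml, then repeat the probe with verbose output
	# and an explicit timeout.
	kubectl --context addons-663262 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
	kubectl --context addons-663262 get ingress
	out/minikube-linux-amd64 -p addons-663262 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

If the controller pod is Ready and the Ingress rule exists but the curl still times out, the failure is most likely in reaching the controller from inside the VM rather than in the test's resource setup.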
addons_test.go:286: (dbg) Run:  kubectl --context addons-663262 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.252
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-663262 addons disable ingress-dns --alsologtostderr -v=1: (1.977768395s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-663262 addons disable ingress --alsologtostderr -v=1: (7.994271947s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-663262 -n addons-663262
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-663262 logs -n 25: (1.378291403s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-361110                                                                     | download-only-361110 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:25 UTC |
	| delete  | -p download-only-311980                                                                     | download-only-311980 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:25 UTC |
	| delete  | -p download-only-119193                                                                     | download-only-119193 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:25 UTC |
	| delete  | -p download-only-361110                                                                     | download-only-361110 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-533773 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC |                     |
	|         | binary-mirror-533773                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38167                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-533773                                                                     | binary-mirror-533773 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:25 UTC |
	| addons  | enable dashboard -p                                                                         | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC |                     |
	|         | addons-663262                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC |                     |
	|         | addons-663262                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-663262 --wait=true                                                                | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:28 UTC | 30 Jan 24 19:28 UTC |
	|         | -p addons-663262                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:28 UTC | 30 Jan 24 19:28 UTC |
	|         | addons-663262                                                                               |                      |         |         |                     |                     |
	| addons  | addons-663262 addons                                                                        | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:28 UTC | 30 Jan 24 19:28 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-663262 ip                                                                            | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:28 UTC | 30 Jan 24 19:28 UTC |
	| addons  | addons-663262 addons disable                                                                | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:28 UTC | 30 Jan 24 19:29 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:28 UTC | 30 Jan 24 19:29 UTC |
	|         | addons-663262                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:29 UTC | 30 Jan 24 19:29 UTC |
	|         | -p addons-663262                                                                            |                      |         |         |                     |                     |
	| addons  | addons-663262 addons disable                                                                | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:29 UTC | 30 Jan 24 19:29 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-663262 ssh curl -s                                                                   | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:29 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-663262 ssh cat                                                                       | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:29 UTC | 30 Jan 24 19:29 UTC |
	|         | /opt/local-path-provisioner/pvc-47ecd82a-1437-4c50-a51d-f453d83df9f5_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-663262 addons disable                                                                | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:29 UTC | 30 Jan 24 19:29 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-663262 addons                                                                        | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:29 UTC | 30 Jan 24 19:29 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-663262 addons                                                                        | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:29 UTC | 30 Jan 24 19:29 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-663262 ip                                                                            | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:31 UTC | 30 Jan 24 19:31 UTC |
	| addons  | addons-663262 addons disable                                                                | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:31 UTC | 30 Jan 24 19:31 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-663262 addons disable                                                                | addons-663262        | jenkins | v1.32.0 | 30 Jan 24 19:31 UTC | 30 Jan 24 19:31 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 19:25:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 19:25:03.314454   12691 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:25:03.314620   12691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:25:03.314629   12691 out.go:309] Setting ErrFile to fd 2...
	I0130 19:25:03.314637   12691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:25:03.314842   12691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:25:03.315486   12691 out.go:303] Setting JSON to false
	I0130 19:25:03.316295   12691 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":449,"bootTime":1706642255,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:25:03.316355   12691 start.go:138] virtualization: kvm guest
	I0130 19:25:03.318443   12691 out.go:177] * [addons-663262] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:25:03.319677   12691 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 19:25:03.320983   12691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:25:03.319692   12691 notify.go:220] Checking for updates...
	I0130 19:25:03.323349   12691 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 19:25:03.324489   12691 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:25:03.325563   12691 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 19:25:03.326680   12691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 19:25:03.327886   12691 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:25:03.359403   12691 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 19:25:03.360595   12691 start.go:298] selected driver: kvm2
	I0130 19:25:03.360605   12691 start.go:902] validating driver "kvm2" against <nil>
	I0130 19:25:03.360618   12691 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 19:25:03.361279   12691 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:25:03.361372   12691 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 19:25:03.375987   12691 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 19:25:03.376043   12691 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 19:25:03.376277   12691 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 19:25:03.376340   12691 cni.go:84] Creating CNI manager for ""
	I0130 19:25:03.376356   12691 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 19:25:03.376367   12691 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 19:25:03.376375   12691 start_flags.go:321] config:
	{Name:addons-663262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-663262 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:25:03.376538   12691 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:25:03.378517   12691 out.go:177] * Starting control plane node addons-663262 in cluster addons-663262
	I0130 19:25:03.379737   12691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 19:25:03.379768   12691 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 19:25:03.379780   12691 cache.go:56] Caching tarball of preloaded images
	I0130 19:25:03.379862   12691 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 19:25:03.379877   12691 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 19:25:03.380224   12691 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/config.json ...
	I0130 19:25:03.380248   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/config.json: {Name:mkd21b8af4f6f2d914fa14eb1617353ae23a2770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:03.380395   12691 start.go:365] acquiring machines lock for addons-663262: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 19:25:03.380453   12691 start.go:369] acquired machines lock for "addons-663262" in 41.468µs
	I0130 19:25:03.380476   12691 start.go:93] Provisioning new machine with config: &{Name:addons-663262 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-663262 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 19:25:03.380548   12691 start.go:125] createHost starting for "" (driver="kvm2")
	I0130 19:25:03.382048   12691 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0130 19:25:03.382203   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:25:03.382241   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:25:03.396271   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0130 19:25:03.396660   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:25:03.397149   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:25:03.397170   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:25:03.397494   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:25:03.397644   12691 main.go:141] libmachine: (addons-663262) Calling .GetMachineName
	I0130 19:25:03.397782   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:25:03.397943   12691 start.go:159] libmachine.API.Create for "addons-663262" (driver="kvm2")
	I0130 19:25:03.398004   12691 client.go:168] LocalClient.Create starting
	I0130 19:25:03.398046   12691 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem
	I0130 19:25:03.467139   12691 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem
	I0130 19:25:03.730434   12691 main.go:141] libmachine: Running pre-create checks...
	I0130 19:25:03.730461   12691 main.go:141] libmachine: (addons-663262) Calling .PreCreateCheck
	I0130 19:25:03.730931   12691 main.go:141] libmachine: (addons-663262) Calling .GetConfigRaw
	I0130 19:25:03.731387   12691 main.go:141] libmachine: Creating machine...
	I0130 19:25:03.731409   12691 main.go:141] libmachine: (addons-663262) Calling .Create
	I0130 19:25:03.731554   12691 main.go:141] libmachine: (addons-663262) Creating KVM machine...
	I0130 19:25:03.732653   12691 main.go:141] libmachine: (addons-663262) DBG | found existing default KVM network
	I0130 19:25:03.733312   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:03.733158   12713 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001b3900}
	I0130 19:25:03.738759   12691 main.go:141] libmachine: (addons-663262) DBG | trying to create private KVM network mk-addons-663262 192.168.39.0/24...
	I0130 19:25:03.803528   12691 main.go:141] libmachine: (addons-663262) DBG | private KVM network mk-addons-663262 192.168.39.0/24 created
	I0130 19:25:03.803559   12691 main.go:141] libmachine: (addons-663262) Setting up store path in /home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262 ...
	I0130 19:25:03.803573   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:03.803467   12713 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:25:03.803593   12691 main.go:141] libmachine: (addons-663262) Building disk image from file:///home/jenkins/minikube-integration/18007-4458/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 19:25:03.803614   12691 main.go:141] libmachine: (addons-663262) Downloading /home/jenkins/minikube-integration/18007-4458/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18007-4458/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0130 19:25:04.021120   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:04.021008   12713 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa...
	I0130 19:25:04.122641   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:04.122490   12713 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/addons-663262.rawdisk...
	I0130 19:25:04.122683   12691 main.go:141] libmachine: (addons-663262) DBG | Writing magic tar header
	I0130 19:25:04.122705   12691 main.go:141] libmachine: (addons-663262) DBG | Writing SSH key tar header
	I0130 19:25:04.122727   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:04.122639   12713 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262 ...
	I0130 19:25:04.122780   12691 main.go:141] libmachine: (addons-663262) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262
	I0130 19:25:04.122827   12691 main.go:141] libmachine: (addons-663262) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262 (perms=drwx------)
	I0130 19:25:04.122844   12691 main.go:141] libmachine: (addons-663262) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458/.minikube/machines
	I0130 19:25:04.122857   12691 main.go:141] libmachine: (addons-663262) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458/.minikube/machines (perms=drwxr-xr-x)
	I0130 19:25:04.122876   12691 main.go:141] libmachine: (addons-663262) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458/.minikube (perms=drwxr-xr-x)
	I0130 19:25:04.122889   12691 main.go:141] libmachine: (addons-663262) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:25:04.122897   12691 main.go:141] libmachine: (addons-663262) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458 (perms=drwxrwxr-x)
	I0130 19:25:04.122910   12691 main.go:141] libmachine: (addons-663262) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0130 19:25:04.122917   12691 main.go:141] libmachine: (addons-663262) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0130 19:25:04.122925   12691 main.go:141] libmachine: (addons-663262) Creating domain...
	I0130 19:25:04.122935   12691 main.go:141] libmachine: (addons-663262) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458
	I0130 19:25:04.122991   12691 main.go:141] libmachine: (addons-663262) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0130 19:25:04.123010   12691 main.go:141] libmachine: (addons-663262) DBG | Checking permissions on dir: /home/jenkins
	I0130 19:25:04.123022   12691 main.go:141] libmachine: (addons-663262) DBG | Checking permissions on dir: /home
	I0130 19:25:04.123039   12691 main.go:141] libmachine: (addons-663262) DBG | Skipping /home - not owner
	I0130 19:25:04.123934   12691 main.go:141] libmachine: (addons-663262) define libvirt domain using xml: 
	I0130 19:25:04.123964   12691 main.go:141] libmachine: (addons-663262) <domain type='kvm'>
	I0130 19:25:04.123976   12691 main.go:141] libmachine: (addons-663262)   <name>addons-663262</name>
	I0130 19:25:04.123987   12691 main.go:141] libmachine: (addons-663262)   <memory unit='MiB'>4000</memory>
	I0130 19:25:04.123997   12691 main.go:141] libmachine: (addons-663262)   <vcpu>2</vcpu>
	I0130 19:25:04.124005   12691 main.go:141] libmachine: (addons-663262)   <features>
	I0130 19:25:04.124011   12691 main.go:141] libmachine: (addons-663262)     <acpi/>
	I0130 19:25:04.124018   12691 main.go:141] libmachine: (addons-663262)     <apic/>
	I0130 19:25:04.124024   12691 main.go:141] libmachine: (addons-663262)     <pae/>
	I0130 19:25:04.124031   12691 main.go:141] libmachine: (addons-663262)     
	I0130 19:25:04.124037   12691 main.go:141] libmachine: (addons-663262)   </features>
	I0130 19:25:04.124045   12691 main.go:141] libmachine: (addons-663262)   <cpu mode='host-passthrough'>
	I0130 19:25:04.124085   12691 main.go:141] libmachine: (addons-663262)   
	I0130 19:25:04.124108   12691 main.go:141] libmachine: (addons-663262)   </cpu>
	I0130 19:25:04.124128   12691 main.go:141] libmachine: (addons-663262)   <os>
	I0130 19:25:04.124146   12691 main.go:141] libmachine: (addons-663262)     <type>hvm</type>
	I0130 19:25:04.124167   12691 main.go:141] libmachine: (addons-663262)     <boot dev='cdrom'/>
	I0130 19:25:04.124176   12691 main.go:141] libmachine: (addons-663262)     <boot dev='hd'/>
	I0130 19:25:04.124182   12691 main.go:141] libmachine: (addons-663262)     <bootmenu enable='no'/>
	I0130 19:25:04.124196   12691 main.go:141] libmachine: (addons-663262)   </os>
	I0130 19:25:04.124208   12691 main.go:141] libmachine: (addons-663262)   <devices>
	I0130 19:25:04.124223   12691 main.go:141] libmachine: (addons-663262)     <disk type='file' device='cdrom'>
	I0130 19:25:04.124240   12691 main.go:141] libmachine: (addons-663262)       <source file='/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/boot2docker.iso'/>
	I0130 19:25:04.124254   12691 main.go:141] libmachine: (addons-663262)       <target dev='hdc' bus='scsi'/>
	I0130 19:25:04.124271   12691 main.go:141] libmachine: (addons-663262)       <readonly/>
	I0130 19:25:04.124279   12691 main.go:141] libmachine: (addons-663262)     </disk>
	I0130 19:25:04.124288   12691 main.go:141] libmachine: (addons-663262)     <disk type='file' device='disk'>
	I0130 19:25:04.124322   12691 main.go:141] libmachine: (addons-663262)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0130 19:25:04.124350   12691 main.go:141] libmachine: (addons-663262)       <source file='/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/addons-663262.rawdisk'/>
	I0130 19:25:04.124376   12691 main.go:141] libmachine: (addons-663262)       <target dev='hda' bus='virtio'/>
	I0130 19:25:04.124388   12691 main.go:141] libmachine: (addons-663262)     </disk>
	I0130 19:25:04.124401   12691 main.go:141] libmachine: (addons-663262)     <interface type='network'>
	I0130 19:25:04.124412   12691 main.go:141] libmachine: (addons-663262)       <source network='mk-addons-663262'/>
	I0130 19:25:04.124427   12691 main.go:141] libmachine: (addons-663262)       <model type='virtio'/>
	I0130 19:25:04.124444   12691 main.go:141] libmachine: (addons-663262)     </interface>
	I0130 19:25:04.124458   12691 main.go:141] libmachine: (addons-663262)     <interface type='network'>
	I0130 19:25:04.124470   12691 main.go:141] libmachine: (addons-663262)       <source network='default'/>
	I0130 19:25:04.124483   12691 main.go:141] libmachine: (addons-663262)       <model type='virtio'/>
	I0130 19:25:04.124494   12691 main.go:141] libmachine: (addons-663262)     </interface>
	I0130 19:25:04.124507   12691 main.go:141] libmachine: (addons-663262)     <serial type='pty'>
	I0130 19:25:04.124515   12691 main.go:141] libmachine: (addons-663262)       <target port='0'/>
	I0130 19:25:04.124531   12691 main.go:141] libmachine: (addons-663262)     </serial>
	I0130 19:25:04.124544   12691 main.go:141] libmachine: (addons-663262)     <console type='pty'>
	I0130 19:25:04.124556   12691 main.go:141] libmachine: (addons-663262)       <target type='serial' port='0'/>
	I0130 19:25:04.124571   12691 main.go:141] libmachine: (addons-663262)     </console>
	I0130 19:25:04.124584   12691 main.go:141] libmachine: (addons-663262)     <rng model='virtio'>
	I0130 19:25:04.124596   12691 main.go:141] libmachine: (addons-663262)       <backend model='random'>/dev/random</backend>
	I0130 19:25:04.124609   12691 main.go:141] libmachine: (addons-663262)     </rng>
	I0130 19:25:04.124617   12691 main.go:141] libmachine: (addons-663262)     
	I0130 19:25:04.124625   12691 main.go:141] libmachine: (addons-663262)     
	I0130 19:25:04.124641   12691 main.go:141] libmachine: (addons-663262)   </devices>
	I0130 19:25:04.124653   12691 main.go:141] libmachine: (addons-663262) </domain>
	I0130 19:25:04.124664   12691 main.go:141] libmachine: (addons-663262) 
	I0130 19:25:04.129957   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:a3:d6:35 in network default
	I0130 19:25:04.130433   12691 main.go:141] libmachine: (addons-663262) Ensuring networks are active...
	I0130 19:25:04.130457   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:04.130995   12691 main.go:141] libmachine: (addons-663262) Ensuring network default is active
	I0130 19:25:04.131231   12691 main.go:141] libmachine: (addons-663262) Ensuring network mk-addons-663262 is active
	I0130 19:25:04.131624   12691 main.go:141] libmachine: (addons-663262) Getting domain xml...
	I0130 19:25:04.132146   12691 main.go:141] libmachine: (addons-663262) Creating domain...
	I0130 19:25:05.402456   12691 main.go:141] libmachine: (addons-663262) Waiting to get IP...
	I0130 19:25:05.403225   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:05.403612   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:05.403636   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:05.403579   12713 retry.go:31] will retry after 251.250558ms: waiting for machine to come up
	I0130 19:25:05.656026   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:05.656471   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:05.656499   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:05.656423   12713 retry.go:31] will retry after 387.161181ms: waiting for machine to come up
	I0130 19:25:06.044820   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:06.045212   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:06.045249   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:06.045174   12713 retry.go:31] will retry after 333.497624ms: waiting for machine to come up
	I0130 19:25:06.380622   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:06.381084   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:06.381129   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:06.381045   12713 retry.go:31] will retry after 447.782853ms: waiting for machine to come up
	I0130 19:25:06.830651   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:06.831090   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:06.831120   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:06.831053   12713 retry.go:31] will retry after 660.348825ms: waiting for machine to come up
	I0130 19:25:07.492690   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:07.493104   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:07.493126   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:07.493084   12713 retry.go:31] will retry after 790.822316ms: waiting for machine to come up
	I0130 19:25:08.285041   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:08.285380   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:08.285408   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:08.285315   12713 retry.go:31] will retry after 717.285337ms: waiting for machine to come up
	I0130 19:25:09.003596   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:09.003936   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:09.003958   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:09.003872   12713 retry.go:31] will retry after 1.222437103s: waiting for machine to come up
	I0130 19:25:10.228136   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:10.228525   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:10.228554   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:10.228478   12713 retry.go:31] will retry after 1.789110331s: waiting for machine to come up
	I0130 19:25:12.020344   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:12.020725   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:12.020747   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:12.020683   12713 retry.go:31] will retry after 1.850027213s: waiting for machine to come up
	I0130 19:25:13.872542   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:13.872921   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:13.872951   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:13.872880   12713 retry.go:31] will retry after 1.87910325s: waiting for machine to come up
	I0130 19:25:15.753855   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:15.754185   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:15.754220   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:15.754163   12713 retry.go:31] will retry after 3.068843454s: waiting for machine to come up
	I0130 19:25:18.825147   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:18.825573   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:18.825598   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:18.825527   12713 retry.go:31] will retry after 2.964668879s: waiting for machine to come up
	I0130 19:25:21.793490   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:21.793872   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find current IP address of domain addons-663262 in network mk-addons-663262
	I0130 19:25:21.793901   12691 main.go:141] libmachine: (addons-663262) DBG | I0130 19:25:21.793829   12713 retry.go:31] will retry after 5.092762462s: waiting for machine to come up
	I0130 19:25:26.890713   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:26.891127   12691 main.go:141] libmachine: (addons-663262) Found IP for machine: 192.168.39.252
	I0130 19:25:26.891151   12691 main.go:141] libmachine: (addons-663262) Reserving static IP address...
	I0130 19:25:26.891179   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has current primary IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:26.891463   12691 main.go:141] libmachine: (addons-663262) DBG | unable to find host DHCP lease matching {name: "addons-663262", mac: "52:54:00:d6:3b:b9", ip: "192.168.39.252"} in network mk-addons-663262
	I0130 19:25:26.958964   12691 main.go:141] libmachine: (addons-663262) DBG | Getting to WaitForSSH function...
	I0130 19:25:26.958997   12691 main.go:141] libmachine: (addons-663262) Reserved static IP address: 192.168.39.252
	I0130 19:25:26.959043   12691 main.go:141] libmachine: (addons-663262) Waiting for SSH to be available...
	I0130 19:25:26.961413   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:26.961750   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:26.961794   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:26.961957   12691 main.go:141] libmachine: (addons-663262) DBG | Using SSH client type: external
	I0130 19:25:26.961983   12691 main.go:141] libmachine: (addons-663262) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa (-rw-------)
	I0130 19:25:26.962033   12691 main.go:141] libmachine: (addons-663262) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.252 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 19:25:26.962057   12691 main.go:141] libmachine: (addons-663262) DBG | About to run SSH command:
	I0130 19:25:26.962070   12691 main.go:141] libmachine: (addons-663262) DBG | exit 0
	I0130 19:25:27.054502   12691 main.go:141] libmachine: (addons-663262) DBG | SSH cmd err, output: <nil>: 
	I0130 19:25:27.054823   12691 main.go:141] libmachine: (addons-663262) KVM machine creation complete!
	I0130 19:25:27.055060   12691 main.go:141] libmachine: (addons-663262) Calling .GetConfigRaw
	I0130 19:25:27.055537   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:25:27.055702   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:25:27.055806   12691 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0130 19:25:27.055818   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:25:27.056992   12691 main.go:141] libmachine: Detecting operating system of created instance...
	I0130 19:25:27.057010   12691 main.go:141] libmachine: Waiting for SSH to be available...
	I0130 19:25:27.057020   12691 main.go:141] libmachine: Getting to WaitForSSH function...
	I0130 19:25:27.057030   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:25:27.058912   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.059260   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:27.059300   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.059426   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:25:27.059606   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:27.059746   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:27.059865   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:25:27.060001   12691 main.go:141] libmachine: Using SSH client type: native
	I0130 19:25:27.060366   12691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0130 19:25:27.060386   12691 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0130 19:25:27.170066   12691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 19:25:27.170088   12691 main.go:141] libmachine: Detecting the provisioner...
	I0130 19:25:27.170096   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:25:27.172667   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.173014   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:27.173046   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.173177   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:25:27.173378   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:27.173539   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:27.173655   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:25:27.173782   12691 main.go:141] libmachine: Using SSH client type: native
	I0130 19:25:27.174080   12691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0130 19:25:27.174091   12691 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0130 19:25:27.283676   12691 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0130 19:25:27.283731   12691 main.go:141] libmachine: found compatible host: buildroot
	I0130 19:25:27.283737   12691 main.go:141] libmachine: Provisioning with buildroot...
	I0130 19:25:27.283745   12691 main.go:141] libmachine: (addons-663262) Calling .GetMachineName
	I0130 19:25:27.284004   12691 buildroot.go:166] provisioning hostname "addons-663262"
	I0130 19:25:27.284031   12691 main.go:141] libmachine: (addons-663262) Calling .GetMachineName
	I0130 19:25:27.284196   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:25:27.286580   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.286902   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:27.286933   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.287061   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:25:27.287242   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:27.287410   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:27.287533   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:25:27.287705   12691 main.go:141] libmachine: Using SSH client type: native
	I0130 19:25:27.288064   12691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0130 19:25:27.288081   12691 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-663262 && echo "addons-663262" | sudo tee /etc/hostname
	I0130 19:25:27.405134   12691 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-663262
	
	I0130 19:25:27.405154   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:25:27.407899   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.408250   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:27.408278   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.408467   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:25:27.408636   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:27.408787   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:27.408921   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:25:27.409056   12691 main.go:141] libmachine: Using SSH client type: native
	I0130 19:25:27.409345   12691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0130 19:25:27.409361   12691 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-663262' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-663262/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-663262' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 19:25:27.525897   12691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 19:25:27.525931   12691 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 19:25:27.525982   12691 buildroot.go:174] setting up certificates
	I0130 19:25:27.525996   12691 provision.go:83] configureAuth start
	I0130 19:25:27.526014   12691 main.go:141] libmachine: (addons-663262) Calling .GetMachineName
	I0130 19:25:27.526260   12691 main.go:141] libmachine: (addons-663262) Calling .GetIP
	I0130 19:25:27.528674   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.529052   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:27.529084   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.529200   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:25:27.531412   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.531743   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:27.531775   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.531901   12691 provision.go:138] copyHostCerts
	I0130 19:25:27.531954   12691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 19:25:27.532075   12691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 19:25:27.532140   12691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 19:25:27.532194   12691 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.addons-663262 san=[192.168.39.252 192.168.39.252 localhost 127.0.0.1 minikube addons-663262]
	I0130 19:25:27.682209   12691 provision.go:172] copyRemoteCerts
	I0130 19:25:27.682305   12691 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 19:25:27.682340   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:25:27.684852   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.685167   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:27.685199   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.685327   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:25:27.685507   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:27.685651   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:25:27.685765   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:25:27.768462   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 19:25:27.789210   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 19:25:27.809855   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0130 19:25:27.831029   12691 provision.go:86] duration metric: configureAuth took 305.020362ms
	I0130 19:25:27.831054   12691 buildroot.go:189] setting minikube options for container-runtime
	I0130 19:25:27.831204   12691 config.go:182] Loaded profile config "addons-663262": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:25:27.831311   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:25:27.833861   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.834211   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:27.834243   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:27.834379   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:25:27.834547   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:27.834711   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:27.834823   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:25:27.834982   12691 main.go:141] libmachine: Using SSH client type: native
	I0130 19:25:27.835448   12691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0130 19:25:27.835474   12691 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 19:25:28.136161   12691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 19:25:28.136198   12691 main.go:141] libmachine: Checking connection to Docker...
	I0130 19:25:28.136217   12691 main.go:141] libmachine: (addons-663262) Calling .GetURL
	I0130 19:25:28.137234   12691 main.go:141] libmachine: (addons-663262) DBG | Using libvirt version 6000000
	I0130 19:25:28.139289   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.139576   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:28.139614   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.139793   12691 main.go:141] libmachine: Docker is up and running!
	I0130 19:25:28.139812   12691 main.go:141] libmachine: Reticulating splines...
	I0130 19:25:28.139819   12691 client.go:171] LocalClient.Create took 24.741804259s
	I0130 19:25:28.139842   12691 start.go:167] duration metric: libmachine.API.Create for "addons-663262" took 24.741899288s
	I0130 19:25:28.139870   12691 start.go:300] post-start starting for "addons-663262" (driver="kvm2")
	I0130 19:25:28.139885   12691 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 19:25:28.139907   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:25:28.140121   12691 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 19:25:28.140144   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:25:28.141954   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.142290   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:28.142316   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.142472   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:25:28.142644   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:28.142809   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:25:28.142934   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:25:28.227173   12691 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 19:25:28.230916   12691 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 19:25:28.230934   12691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 19:25:28.230988   12691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 19:25:28.231012   12691 start.go:303] post-start completed in 91.129892ms
	I0130 19:25:28.231036   12691 main.go:141] libmachine: (addons-663262) Calling .GetConfigRaw
	I0130 19:25:28.231606   12691 main.go:141] libmachine: (addons-663262) Calling .GetIP
	I0130 19:25:28.233890   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.234271   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:28.234300   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.234489   12691 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/config.json ...
	I0130 19:25:28.234664   12691 start.go:128] duration metric: createHost completed in 24.854105013s
	I0130 19:25:28.234688   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:25:28.236690   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.236971   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:28.237006   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.237102   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:25:28.237258   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:28.237452   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:28.237593   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:25:28.237728   12691 main.go:141] libmachine: Using SSH client type: native
	I0130 19:25:28.238080   12691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0130 19:25:28.238092   12691 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 19:25:28.347644   12691 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706642728.329211402
	
	I0130 19:25:28.347667   12691 fix.go:206] guest clock: 1706642728.329211402
	I0130 19:25:28.347677   12691 fix.go:219] Guest: 2024-01-30 19:25:28.329211402 +0000 UTC Remote: 2024-01-30 19:25:28.234676565 +0000 UTC m=+24.966971035 (delta=94.534837ms)
	I0130 19:25:28.347703   12691 fix.go:190] guest clock delta is within tolerance: 94.534837ms
	I0130 19:25:28.347714   12691 start.go:83] releasing machines lock for "addons-663262", held for 24.9672485s
	I0130 19:25:28.347756   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:25:28.347994   12691 main.go:141] libmachine: (addons-663262) Calling .GetIP
	I0130 19:25:28.350521   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.350807   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:28.350841   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.350970   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:25:28.351536   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:25:28.351705   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:25:28.351794   12691 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 19:25:28.351836   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:25:28.351911   12691 ssh_runner.go:195] Run: cat /version.json
	I0130 19:25:28.351935   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:25:28.354445   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.354648   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.354766   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:28.354789   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.354934   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:25:28.355064   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:28.355091   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:28.355095   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:28.355237   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:25:28.355295   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:25:28.355363   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:25:28.355453   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:25:28.355512   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:25:28.355628   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:25:28.460088   12691 ssh_runner.go:195] Run: systemctl --version
	I0130 19:25:28.465428   12691 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 19:25:28.633216   12691 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 19:25:28.639483   12691 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 19:25:28.639532   12691 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 19:25:28.653271   12691 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 19:25:28.653285   12691 start.go:475] detecting cgroup driver to use...
	I0130 19:25:28.653333   12691 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 19:25:28.668012   12691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 19:25:28.679145   12691 docker.go:217] disabling cri-docker service (if available) ...
	I0130 19:25:28.679186   12691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 19:25:28.690044   12691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 19:25:28.701218   12691 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 19:25:28.800246   12691 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 19:25:28.919919   12691 docker.go:233] disabling docker service ...
	I0130 19:25:28.919987   12691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 19:25:28.933491   12691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 19:25:28.944132   12691 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 19:25:29.055163   12691 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 19:25:29.155977   12691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 19:25:29.167254   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 19:25:29.183248   12691 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 19:25:29.183315   12691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 19:25:29.191875   12691 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 19:25:29.191929   12691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 19:25:29.200344   12691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 19:25:29.208582   12691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 19:25:29.216884   12691 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 19:25:29.225439   12691 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 19:25:29.232817   12691 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 19:25:29.232855   12691 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 19:25:29.245364   12691 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 19:25:29.252894   12691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 19:25:29.351680   12691 ssh_runner.go:195] Run: sudo systemctl restart crio
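The three sed edits above leave CRI-O pointed at the registry.k8s.io/pause:3.9 pause image, using the cgroupfs cgroup manager, and running conmon in the pod cgroup; the daemon-reload and restart then pick those changes up. A hypothetical spot-check against the same profile, not something the test itself runs, would be:

	# Illustrative only: confirm the values the sed commands above wrote into the CRI-O drop-in.
	out/minikube-linux-amd64 -p addons-663262 ssh "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"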
	I0130 19:25:29.516899   12691 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 19:25:29.516989   12691 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 19:25:29.521474   12691 start.go:543] Will wait 60s for crictl version
	I0130 19:25:29.521537   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:25:29.524856   12691 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 19:25:29.558678   12691 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 19:25:29.558815   12691 ssh_runner.go:195] Run: crio --version
	I0130 19:25:29.603126   12691 ssh_runner.go:195] Run: crio --version
	I0130 19:25:29.649762   12691 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 19:25:29.651066   12691 main.go:141] libmachine: (addons-663262) Calling .GetIP
	I0130 19:25:29.653313   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:29.653611   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:25:29.653632   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:25:29.653863   12691 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 19:25:29.657563   12691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 19:25:29.669705   12691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 19:25:29.669759   12691 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 19:25:29.700803   12691 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 19:25:29.700865   12691 ssh_runner.go:195] Run: which lz4
	I0130 19:25:29.704424   12691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 19:25:29.708096   12691 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 19:25:29.708121   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 19:25:31.479252   12691 crio.go:444] Took 1.774855 seconds to copy over tarball
	I0130 19:25:31.479313   12691 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 19:25:34.448658   12691 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.969321574s)
	I0130 19:25:34.448691   12691 crio.go:451] Took 2.969410 seconds to extract the tarball
	I0130 19:25:34.448702   12691 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 19:25:34.488152   12691 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 19:25:34.557735   12691 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 19:25:34.557756   12691 cache_images.go:84] Images are preloaded, skipping loading
	I0130 19:25:34.557811   12691 ssh_runner.go:195] Run: crio config
	I0130 19:25:34.613385   12691 cni.go:84] Creating CNI manager for ""
	I0130 19:25:34.613416   12691 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 19:25:34.613439   12691 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 19:25:34.613466   12691 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.252 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-663262 NodeName:addons-663262 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.252 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 19:25:34.613668   12691 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.252
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-663262"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.252
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.252"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 19:25:34.613753   12691 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-663262 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.252
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-663262 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
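The kubeadm config and kubelet unit rendered above are copied to the guest by the scp calls just below (between 19:25:34.62 and 19:25:34.66); the kubeadm config lands at /var/tmp/minikube/kubeadm.yaml and is consumed by the kubeadm init invocation at 19:25:35.578659. As an illustration only, the on-disk copy could be inspected with the same ssh convention the test uses elsewhere:

	# Illustrative only: show the config file that kubeadm init consumes below.
	out/minikube-linux-amd64 -p addons-663262 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml"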
	I0130 19:25:34.613811   12691 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 19:25:34.622186   12691 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 19:25:34.622236   12691 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 19:25:34.629810   12691 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0130 19:25:34.644483   12691 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 19:25:34.658731   12691 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0130 19:25:34.672968   12691 ssh_runner.go:195] Run: grep 192.168.39.252	control-plane.minikube.internal$ /etc/hosts
	I0130 19:25:34.676507   12691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 19:25:34.689297   12691 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262 for IP: 192.168.39.252
	I0130 19:25:34.689316   12691 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:34.689441   12691 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 19:25:34.803698   12691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt ...
	I0130 19:25:34.803722   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt: {Name:mk98170174a2cbd4cd7091b09f1dbe4c06f12a69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:34.803889   12691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key ...
	I0130 19:25:34.803912   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key: {Name:mk6175b3ed25ec826292506fec2c919c6f6c61ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:34.804004   12691 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 19:25:34.883369   12691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt ...
	I0130 19:25:34.883395   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt: {Name:mk5890581534800b11fe6bc90e351d2652b80606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:34.883535   12691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key ...
	I0130 19:25:34.883549   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key: {Name:mk29f74f5a4ef9053e5f8598eee744cff52fd448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:34.883664   12691 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.key
	I0130 19:25:34.883682   12691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt with IP's: []
	I0130 19:25:35.064970   12691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt ...
	I0130 19:25:35.065004   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: {Name:mk03671c8f4f1a7142dd97ba7dad0a645f68d862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:35.065171   12691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.key ...
	I0130 19:25:35.065185   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.key: {Name:mkcc7b6d924b69d114f9a55074ea35c910a6c90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:35.065273   12691 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.key.ba3365be
	I0130 19:25:35.065295   12691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.crt.ba3365be with IP's: [192.168.39.252 10.96.0.1 127.0.0.1 10.0.0.1]
	I0130 19:25:35.146378   12691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.crt.ba3365be ...
	I0130 19:25:35.146408   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.crt.ba3365be: {Name:mk8b83190dfc1e360191d0f036d72031bb1cedf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:35.146568   12691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.key.ba3365be ...
	I0130 19:25:35.146583   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.key.ba3365be: {Name:mk72b2ef1542cd5d499d8434a1028545d510171a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:35.146672   12691 certs.go:337] copying /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.crt.ba3365be -> /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.crt
	I0130 19:25:35.146766   12691 certs.go:341] copying /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.key.ba3365be -> /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.key
	I0130 19:25:35.146836   12691 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/proxy-client.key
	I0130 19:25:35.146858   12691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/proxy-client.crt with IP's: []
	I0130 19:25:35.252959   12691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/proxy-client.crt ...
	I0130 19:25:35.252998   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/proxy-client.crt: {Name:mk2589f5e7405f29eea8a64948da6fbdf47429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:35.253150   12691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/proxy-client.key ...
	I0130 19:25:35.253163   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/proxy-client.key: {Name:mk96270b122c3cffc9c07b593398804cc058aa99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:35.253337   12691 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 19:25:35.253382   12691 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 19:25:35.253418   12691 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 19:25:35.253454   12691 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 19:25:35.254000   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 19:25:35.284761   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 19:25:35.310311   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 19:25:35.335088   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 19:25:35.359605   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 19:25:35.381567   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 19:25:35.402951   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 19:25:35.424296   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 19:25:35.445819   12691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 19:25:35.467471   12691 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
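The scp calls above place the profile's certificate material under /var/lib/minikube/certs, install the minikubeCA bundle as /usr/share/ca-certificates/minikubeCA.pem, and write a kubeconfig to /var/lib/minikube/kubeconfig. A hypothetical check of the result, not part of the test run:

	# Illustrative only: list the certificates kubeadm will find on the node.
	out/minikube-linux-amd64 -p addons-663262 ssh "sudo ls -l /var/lib/minikube/certs"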
	I0130 19:25:35.482691   12691 ssh_runner.go:195] Run: openssl version
	I0130 19:25:35.488035   12691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 19:25:35.497626   12691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 19:25:35.502034   12691 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 19:25:35.502085   12691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 19:25:35.507248   12691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 19:25:35.516112   12691 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 19:25:35.520028   12691 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 19:25:35.520067   12691 kubeadm.go:404] StartCluster: {Name:addons-663262 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-663262 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:25:35.520124   12691 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 19:25:35.520156   12691 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 19:25:35.554825   12691 cri.go:89] found id: ""
	I0130 19:25:35.554897   12691 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 19:25:35.563087   12691 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 19:25:35.570969   12691 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 19:25:35.578631   12691 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 19:25:35.578659   12691 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 19:25:35.769200   12691 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 19:25:47.528077   12691 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 19:25:47.528155   12691 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 19:25:47.528232   12691 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 19:25:47.528358   12691 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 19:25:47.528469   12691 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 19:25:47.528558   12691 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 19:25:47.529927   12691 out.go:204]   - Generating certificates and keys ...
	I0130 19:25:47.530015   12691 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 19:25:47.530099   12691 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 19:25:47.530171   12691 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0130 19:25:47.530222   12691 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0130 19:25:47.530271   12691 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0130 19:25:47.530317   12691 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0130 19:25:47.530364   12691 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0130 19:25:47.530479   12691 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-663262 localhost] and IPs [192.168.39.252 127.0.0.1 ::1]
	I0130 19:25:47.530526   12691 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0130 19:25:47.530619   12691 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-663262 localhost] and IPs [192.168.39.252 127.0.0.1 ::1]
	I0130 19:25:47.530676   12691 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0130 19:25:47.530731   12691 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0130 19:25:47.530768   12691 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0130 19:25:47.530816   12691 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 19:25:47.530858   12691 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 19:25:47.530905   12691 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 19:25:47.530959   12691 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 19:25:47.531004   12691 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 19:25:47.531095   12691 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 19:25:47.531188   12691 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 19:25:47.532403   12691 out.go:204]   - Booting up control plane ...
	I0130 19:25:47.532493   12691 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 19:25:47.532570   12691 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 19:25:47.532645   12691 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 19:25:47.532761   12691 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 19:25:47.532868   12691 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 19:25:47.532919   12691 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 19:25:47.533124   12691 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 19:25:47.533198   12691 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502717 seconds
	I0130 19:25:47.533303   12691 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 19:25:47.533430   12691 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 19:25:47.533512   12691 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 19:25:47.533682   12691 kubeadm.go:322] [mark-control-plane] Marking the node addons-663262 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 19:25:47.533759   12691 kubeadm.go:322] [bootstrap-token] Using token: e6sqh0.aslu16z9t5t3ishd
	I0130 19:25:47.534915   12691 out.go:204]   - Configuring RBAC rules ...
	I0130 19:25:47.535021   12691 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 19:25:47.535113   12691 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 19:25:47.535247   12691 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 19:25:47.535385   12691 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 19:25:47.535499   12691 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 19:25:47.535586   12691 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 19:25:47.535723   12691 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 19:25:47.535790   12691 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 19:25:47.535860   12691 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 19:25:47.535870   12691 kubeadm.go:322] 
	I0130 19:25:47.535962   12691 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 19:25:47.535978   12691 kubeadm.go:322] 
	I0130 19:25:47.536057   12691 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 19:25:47.536069   12691 kubeadm.go:322] 
	I0130 19:25:47.536106   12691 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 19:25:47.536213   12691 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 19:25:47.536257   12691 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 19:25:47.536263   12691 kubeadm.go:322] 
	I0130 19:25:47.536305   12691 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 19:25:47.536313   12691 kubeadm.go:322] 
	I0130 19:25:47.536357   12691 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 19:25:47.536363   12691 kubeadm.go:322] 
	I0130 19:25:47.536429   12691 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 19:25:47.536524   12691 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 19:25:47.536617   12691 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 19:25:47.536626   12691 kubeadm.go:322] 
	I0130 19:25:47.536731   12691 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 19:25:47.536797   12691 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 19:25:47.536803   12691 kubeadm.go:322] 
	I0130 19:25:47.536872   12691 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token e6sqh0.aslu16z9t5t3ishd \
	I0130 19:25:47.536958   12691 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 19:25:47.536977   12691 kubeadm.go:322] 	--control-plane 
	I0130 19:25:47.536989   12691 kubeadm.go:322] 
	I0130 19:25:47.537109   12691 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 19:25:47.537117   12691 kubeadm.go:322] 
	I0130 19:25:47.537199   12691 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token e6sqh0.aslu16z9t5t3ishd \
	I0130 19:25:47.537290   12691 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
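For reference, the --discovery-token-ca-cert-hash value printed in the join commands above can be recomputed on the control-plane node from the cluster CA certificate. This is the standard openssl pipeline documented for kubeadm join, assuming the default kubeadm PKI layout under /etc/kubernetes/pki:

# Recompute the sha256 discovery hash from the cluster CA (default kubeadm PKI path assumed)
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'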
	I0130 19:25:47.537299   12691 cni.go:84] Creating CNI manager for ""
	I0130 19:25:47.537305   12691 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 19:25:47.538733   12691 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 19:25:47.539924   12691 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 19:25:47.597790   12691 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
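The 457-byte /etc/cni/net.d/1-k8s.conflist written above is not reproduced in this log. As a rough illustration only, a bridge CNI configuration of this kind typically looks like the following; the bridge name, subnet, and plugin options are assumptions for the sketch, not the exact file minikube generated:

# Illustrative bridge CNI config only; NOT the exact file minikube wrote above
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF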
	I0130 19:25:47.686096   12691 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 19:25:47.686192   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:47.686234   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=addons-663262 minikube.k8s.io/updated_at=2024_01_30T19_25_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:47.733160   12691 ops.go:34] apiserver oom_adj: -16
	I0130 19:25:47.919017   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:48.419252   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:48.919317   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:49.419196   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:49.919990   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:50.419395   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:50.919943   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:51.419517   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:51.919328   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:52.419745   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:52.919217   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:53.419492   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:53.920066   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:54.419586   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:54.919193   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:55.419368   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:55.919986   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:56.419830   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:56.919317   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:57.420046   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:57.919824   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:58.419726   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:58.919810   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:59.419469   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:59.919994   12691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:26:00.027203   12691 kubeadm.go:1088] duration metric: took 12.341065251s to wait for elevateKubeSystemPrivileges.
	I0130 19:26:00.027242   12691 kubeadm.go:406] StartCluster complete in 24.50717655s
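The burst of "kubectl get sa default" calls between 19:25:47 and 19:26:00 is minikube polling until the default ServiceAccount exists before it grants kube-system privileges. A standalone equivalent of that wait loop, using the binary and kubeconfig paths shown in the log, would look roughly like this:

# Poll until the default ServiceAccount is created (rough equivalent of the loop above)
until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done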
	I0130 19:26:00.027274   12691 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:26:00.027473   12691 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 19:26:00.028119   12691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:26:00.028358   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 19:26:00.028432   12691 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
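The toEnable map above reflects the addons requested for this profile. On the command line the same selection is expressed with the --addons flag to minikube start; the invocation below is illustrative only, since the exact flags passed by the test harness are not shown in this log:

# Illustrative only: starting a kvm2/crio profile with a comparable addon set enabled
minikube start -p addons-663262 --driver=kvm2 --container-runtime=crio \
  --addons=ingress --addons=ingress-dns --addons=registry \
  --addons=metrics-server --addons=csi-hostpath-driver --addons=volumesnapshots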
	I0130 19:26:00.028527   12691 addons.go:69] Setting cloud-spanner=true in profile "addons-663262"
	I0130 19:26:00.028544   12691 addons.go:69] Setting metrics-server=true in profile "addons-663262"
	I0130 19:26:00.028546   12691 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-663262"
	I0130 19:26:00.028561   12691 addons.go:234] Setting addon metrics-server=true in "addons-663262"
	I0130 19:26:00.028566   12691 addons.go:234] Setting addon cloud-spanner=true in "addons-663262"
	I0130 19:26:00.028575   12691 addons.go:69] Setting storage-provisioner=true in profile "addons-663262"
	I0130 19:26:00.028585   12691 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-663262"
	I0130 19:26:00.028587   12691 addons.go:69] Setting helm-tiller=true in profile "addons-663262"
	I0130 19:26:00.028604   12691 addons.go:69] Setting registry=true in profile "addons-663262"
	I0130 19:26:00.028607   12691 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-663262"
	I0130 19:26:00.028613   12691 addons.go:234] Setting addon helm-tiller=true in "addons-663262"
	I0130 19:26:00.028615   12691 addons.go:234] Setting addon registry=true in "addons-663262"
	I0130 19:26:00.028619   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.028629   12691 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-663262"
	I0130 19:26:00.028641   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.028646   12691 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-663262"
	I0130 19:26:00.028536   12691 addons.go:69] Setting yakd=true in profile "addons-663262"
	I0130 19:26:00.028658   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.028660   12691 addons.go:69] Setting default-storageclass=true in profile "addons-663262"
	I0130 19:26:00.028672   12691 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-663262"
	I0130 19:26:00.028698   12691 addons.go:234] Setting addon yakd=true in "addons-663262"
	I0130 19:26:00.028745   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.028619   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.028876   12691 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-663262"
	I0130 19:26:00.028953   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.029140   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.029151   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.029168   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.029174   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.029181   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.029204   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.029226   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.029252   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.029265   12691 addons.go:69] Setting volumesnapshots=true in profile "addons-663262"
	I0130 19:26:00.029278   12691 addons.go:234] Setting addon volumesnapshots=true in "addons-663262"
	I0130 19:26:00.029313   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.029364   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.029384   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.029435   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.029464   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.029575   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.029593   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.029621   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.029650   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.028597   12691 addons.go:234] Setting addon storage-provisioner=true in "addons-663262"
	I0130 19:26:00.029717   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.029764   12691 addons.go:69] Setting ingress=true in profile "addons-663262"
	I0130 19:26:00.029783   12691 addons.go:234] Setting addon ingress=true in "addons-663262"
	I0130 19:26:00.029819   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.028654   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.030038   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.030059   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.030096   12691 config.go:182] Loaded profile config "addons-663262": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:26:00.030145   12691 addons.go:69] Setting ingress-dns=true in profile "addons-663262"
	I0130 19:26:00.030159   12691 addons.go:234] Setting addon ingress-dns=true in "addons-663262"
	I0130 19:26:00.030165   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.030182   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.030196   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.030219   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.030246   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.030549   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.030580   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.028539   12691 addons.go:69] Setting inspektor-gadget=true in profile "addons-663262"
	I0130 19:26:00.030779   12691 addons.go:234] Setting addon inspektor-gadget=true in "addons-663262"
	I0130 19:26:00.030820   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.030711   12691 addons.go:69] Setting gcp-auth=true in profile "addons-663262"
	I0130 19:26:00.030893   12691 mustload.go:65] Loading cluster: addons-663262
	I0130 19:26:00.050020   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34181
	I0130 19:26:00.050250   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44849
	I0130 19:26:00.050664   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.050704   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.050667   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35841
	I0130 19:26:00.051280   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44405
	I0130 19:26:00.051354   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.051371   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.051385   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.051617   12691 config.go:182] Loaded profile config "addons-663262": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:26:00.051771   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.051783   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.051795   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.051863   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.051880   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.051971   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.052000   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.052218   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.052371   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.052389   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.052612   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.052638   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.053866   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.054263   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.054301   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.054489   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.054810   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.055345   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.055378   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.066680   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41407
	I0130 19:26:00.066962   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.066977   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.067042   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0130 19:26:00.067392   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.067502   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.067617   12691 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-663262"
	I0130 19:26:00.067654   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.067995   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.068039   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.068164   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.068199   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.068244   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.068263   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.068334   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.068569   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.068896   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.068922   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.069105   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.069140   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.069209   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.069787   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.069826   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.077642   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42477
	I0130 19:26:00.078045   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.078769   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.078794   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.079097   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.079599   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.079644   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.081151   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I0130 19:26:00.081470   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.081931   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.081947   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.082324   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.082421   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42571
	I0130 19:26:00.082813   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.082830   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.082859   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.083302   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.083319   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.084227   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.084748   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.084771   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.088452   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42027
	I0130 19:26:00.088635   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37633
	I0130 19:26:00.089020   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.089096   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.089673   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.089690   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.090128   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.090338   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.090358   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.090679   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.090697   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.090699   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.091282   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.091316   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.097140   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46499
	I0130 19:26:00.097554   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.098041   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.098057   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.098391   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.098555   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.101129   12691 addons.go:234] Setting addon default-storageclass=true in "addons-663262"
	I0130 19:26:00.101169   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.101545   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.101587   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.107477   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42067
	I0130 19:26:00.107889   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.108319   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.108332   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.108565   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.108709   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.109299   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45907
	I0130 19:26:00.109769   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.110264   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.110279   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.110672   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.110709   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.110840   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.112922   12691 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0130 19:26:00.113145   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.114461   12691 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 19:26:00.114477   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 19:26:00.114499   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
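Each "installing /etc/kubernetes/addons/... .yaml" line paired with an "scp memory -->" line in the remainder of this log copies an addon manifest onto the node over SSH; minikube then applies the copied manifests with the kubectl binary it ships on the node. A manual equivalent for this first manifest would be the following sketch (the apply step itself happens later in the full log, not in this excerpt):

# Apply a copied addon manifest with the node-local kubectl (sketch)
sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply \
  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
  --kubeconfig=/var/lib/minikube/kubeconfig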
	I0130 19:26:00.115816   12691 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0130 19:26:00.117490   12691 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0130 19:26:00.117505   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0130 19:26:00.117521   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.117173   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0130 19:26:00.118170   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36379
	I0130 19:26:00.118526   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.118725   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.120121   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.120140   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.120159   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.120307   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.120356   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.120380   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.120521   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.120581   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.120795   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.120860   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.120965   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.120975   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.121155   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
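Each "new ssh client" line records the connection details minikube uses to reach the node. A manual session with the same parameters, taken verbatim from the log line above, would be:

# Manual SSH session equivalent to the client minikube constructs above
ssh -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa \
    -p 22 docker@192.168.39.252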
	I0130 19:26:00.121679   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.121709   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.121906   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.123033   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.123126   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.123146   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.123172   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45187
	I0130 19:26:00.123955   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36963
	I0130 19:26:00.124106   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.124116   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.124271   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.124277   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.128361   12691 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0130 19:26:00.124538   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.124589   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.124597   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.124671   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.126476   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I0130 19:26:00.126937   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0130 19:26:00.127113   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46777
	I0130 19:26:00.129453   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35873
	I0130 19:26:00.130248   12691 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0130 19:26:00.130259   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0130 19:26:00.130271   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.130323   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.131545   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I0130 19:26:00.131550   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.131632   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.131649   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.133106   12691 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0130 19:26:00.131997   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.132022   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.132041   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.132059   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.132594   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.133191   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0130 19:26:00.133552   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.134246   12691 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0130 19:26:00.134262   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0130 19:26:00.134279   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.133707   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.134313   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.134362   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.135339   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.135395   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.135418   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.135429   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.135457   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.135476   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.135551   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.135571   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.136146   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.136156   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0130 19:26:00.136153   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.136246   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.136277   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.136295   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.136304   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.136329   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.136331   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.136365   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.136487   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.136701   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.136755   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.136819   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.136879   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.137175   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.137192   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.137409   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.137454   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.137495   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.137561   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.137986   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.138007   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.138039   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.138063   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.138068   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.138186   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:00.138539   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.138575   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.139247   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.139433   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.139831   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.141580   12691 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0130 19:26:00.140417   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.140453   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41213
	I0130 19:26:00.140996   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.141202   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.141229   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.141681   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38201
	I0130 19:26:00.141790   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.143909   12691 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0130 19:26:00.142184   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.142885   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.142946   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.143368   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.143618   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.143737   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.144996   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.147062   12691 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0130 19:26:00.145170   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.145313   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.145966   12691 out.go:177]   - Using image docker.io/registry:2.8.3
	I0130 19:26:00.145992   12691 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0130 19:26:00.146413   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.146525   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.148417   12691 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0130 19:26:00.149310   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0130 19:26:00.149336   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.149343   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.148607   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.149479   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.149800   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:00.149286   12691 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0130 19:26:00.150180   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.150734   12691 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0130 19:26:00.151287   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.153112   12691 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0130 19:26:00.152080   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:00.152250   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.152423   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.152639   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.153003   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.154269   12691 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0130 19:26:00.154375   12691 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0130 19:26:00.155425   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0130 19:26:00.155444   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.155504   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.155527   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0130 19:26:00.155532   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.155542   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.156731   12691 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0130 19:26:00.155663   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.157617   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.159736   12691 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0130 19:26:00.159784   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
	I0130 19:26:00.159287   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.158371   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.159953   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.160006   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.160498   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0130 19:26:00.161279   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.162021   12691 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0130 19:26:00.163466   12691 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0130 19:26:00.163479   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0130 19:26:00.163491   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.161365   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.163533   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.163548   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.161832   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42011
	I0130 19:26:00.161972   12691 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0130 19:26:00.161988   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.162125   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.162346   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.162364   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.162608   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.162678   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.163945   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.164693   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.166775   12691 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0130 19:26:00.164839   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.164853   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.165638   12691 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 19:26:00.168183   12691 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 19:26:00.168196   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 19:26:00.168207   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.165956   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.166024   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.166135   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.168248   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.168259   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.168261   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.166462   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.168271   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.166487   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.170296   12691 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0130 19:26:00.166940   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.166963   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.169271   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.169286   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.169302   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.169312   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.171141   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.172413   12691 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0130 19:26:00.173541   12691 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0130 19:26:00.173555   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0130 19:26:00.173577   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.171669   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.171691   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.171710   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.171878   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.171955   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.172109   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.174193   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.172239   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.174367   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.174414   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.175058   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.175248   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I0130 19:26:00.175378   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.175867   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.175989   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.177465   12691 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0130 19:26:00.176296   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.177430   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.178122   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.178616   12691 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0130 19:26:00.178624   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0130 19:26:00.178633   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.178643   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.178532   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.178674   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.178712   12691 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0130 19:26:00.179886   12691 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0130 19:26:00.178894   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.178961   12691 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 19:26:00.179950   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 19:26:00.179968   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.179396   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.180014   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.179902   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0130 19:26:00.180049   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.180203   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.180327   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.181091   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.181316   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:00.181481   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.181930   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.181949   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.182302   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.182449   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.182568   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.182678   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.183742   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.183868   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.185212   12691 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0130 19:26:00.184236   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.184339   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.184427   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.184886   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.186047   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I0130 19:26:00.186491   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.186521   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.187632   12691 out.go:177]   - Using image docker.io/busybox:stable
	I0130 19:26:00.186536   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.186661   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.186677   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.186827   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:00.188970   12691 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0130 19:26:00.188981   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0130 19:26:00.188992   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:00.189027   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.189045   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.189113   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.189551   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:00.189561   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:00.189569   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:00.190186   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:00.190644   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:00.191836   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.192201   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:00.192356   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:00.192364   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:00.192497   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:00.192618   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:00.192733   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	W0130 19:26:00.220262   12691 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56110->192.168.39.252:22: read: connection reset by peer
	I0130 19:26:00.220285   12691 retry.go:31] will retry after 364.770963ms: ssh: handshake failed: read tcp 192.168.39.1:56110->192.168.39.252:22: read: connection reset by peer
	I0130 19:26:00.375866   12691 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 19:26:00.375883   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0130 19:26:00.376536   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0130 19:26:00.409453   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0130 19:26:00.424807   12691 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0130 19:26:00.424829   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0130 19:26:00.425563   12691 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0130 19:26:00.425583   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0130 19:26:00.484576   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 19:26:00.580239   12691 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-663262" context rescaled to 1 replicas
	I0130 19:26:00.580275   12691 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 19:26:00.582250   12691 out.go:177] * Verifying Kubernetes components...
	I0130 19:26:00.583386   12691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 19:26:00.665892   12691 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0130 19:26:00.665914   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0130 19:26:00.723454   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 19:26:00.917229   12691 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0130 19:26:00.917261   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0130 19:26:00.926224   12691 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 19:26:00.926247   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 19:26:00.983691   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0130 19:26:00.987817   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0130 19:26:00.993558   12691 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0130 19:26:00.993573   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0130 19:26:01.001419   12691 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0130 19:26:01.001437   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0130 19:26:01.002248   12691 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0130 19:26:01.002266   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0130 19:26:01.004346   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 19:26:01.010342   12691 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0130 19:26:01.010359   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0130 19:26:01.041313   12691 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0130 19:26:01.041333   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0130 19:26:01.085791   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0130 19:26:01.094325   12691 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0130 19:26:01.094347   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0130 19:26:01.220756   12691 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 19:26:01.220778   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 19:26:01.277295   12691 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0130 19:26:01.277315   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0130 19:26:01.290360   12691 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0130 19:26:01.290377   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0130 19:26:01.321601   12691 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0130 19:26:01.321624   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0130 19:26:01.323105   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0130 19:26:01.330824   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0130 19:26:01.333404   12691 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0130 19:26:01.333417   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0130 19:26:01.348719   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 19:26:01.368059   12691 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0130 19:26:01.368080   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0130 19:26:01.384700   12691 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0130 19:26:01.384725   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0130 19:26:01.417388   12691 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0130 19:26:01.417412   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0130 19:26:01.446312   12691 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0130 19:26:01.446331   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0130 19:26:01.469079   12691 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0130 19:26:01.469097   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0130 19:26:01.542543   12691 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0130 19:26:01.542564   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0130 19:26:01.579919   12691 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0130 19:26:01.579940   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0130 19:26:01.594234   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0130 19:26:01.598589   12691 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0130 19:26:01.598607   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0130 19:26:01.648704   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0130 19:26:01.671524   12691 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0130 19:26:01.671553   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0130 19:26:01.686403   12691 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0130 19:26:01.686431   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0130 19:26:01.764046   12691 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0130 19:26:01.764069   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0130 19:26:01.780088   12691 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0130 19:26:01.780115   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0130 19:26:01.837954   12691 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0130 19:26:01.837978   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0130 19:26:01.870475   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0130 19:26:01.889673   12691 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0130 19:26:01.889690   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0130 19:26:01.951076   12691 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0130 19:26:01.951111   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0130 19:26:02.004569   12691 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0130 19:26:02.004592   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0130 19:26:02.045474   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0130 19:26:04.480344   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.103769112s)
	I0130 19:26:04.480401   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:04.480416   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:04.480723   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:04.480771   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:04.480792   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:04.480801   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:04.482350   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:04.482767   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:04.482787   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:06.698590   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.289101306s)
	I0130 19:26:06.698634   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:06.698590   12691 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.213979791s)
	I0130 19:26:06.698669   12691 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (6.115258688s)
	I0130 19:26:06.698671   12691 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0130 19:26:06.698645   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:06.699081   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:06.699097   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:06.699107   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:06.699115   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:06.699362   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:06.699380   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:06.699689   12691 node_ready.go:35] waiting up to 6m0s for node "addons-663262" to be "Ready" ...
	I0130 19:26:06.924544   12691 node_ready.go:49] node "addons-663262" has status "Ready":"True"
	I0130 19:26:06.924573   12691 node_ready.go:38] duration metric: took 224.848653ms waiting for node "addons-663262" to be "Ready" ...
	I0130 19:26:06.924586   12691 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 19:26:07.108965   12691 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pgmv6" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.764072   12691 pod_ready.go:97] pod "coredns-5dd5756b68-pgmv6" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 19:26:00 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 19:26:00 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 19:26:00 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 19:26:00 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.252 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-01-30 19:26:00 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-01-30 19:26:05 +0000 UTC,FinishedAt:2024-01-30 19:26:05 +0000 UTC,ContainerID:cri-o://1cd33e34d26767e8d3408ac072beda417966d468377ba53ade034e4195265d36,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1cd33e34d26767e8d3408ac072beda417966d468377ba53ade034e4195265d36 Started:0xc00374b15c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0130 19:26:07.764096   12691 pod_ready.go:81] duration metric: took 655.093906ms waiting for pod "coredns-5dd5756b68-pgmv6" in "kube-system" namespace to be "Ready" ...
	E0130 19:26:07.764107   12691 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-pgmv6" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 19:26:00 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 19:26:00 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 19:26:00 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 19:26:00 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.252 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-01-30 19:26:00 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-01-30 19:26:05 +0000 UTC,FinishedAt:2024-01-30 19:26:05 +0000 UTC,ContainerID:cri-o://1cd33e34d26767e8d3408ac072beda417966d468377ba53ade034e4195265d36,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1cd33e34d26767e8d3408ac072beda417966d468377ba53ade034e4195265d36 Started:0xc00374b15c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0130 19:26:07.764113   12691 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.945557   12691 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0130 19:26:07.945609   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:07.948881   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:07.949292   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:07.949319   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:07.949502   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:07.949769   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:07.949931   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:07.950073   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:08.161811   12691 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0130 19:26:08.200237   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.2165107s)
	I0130 19:26:08.200290   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:08.200300   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:08.200588   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:08.200690   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:08.200716   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:08.200733   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:08.200743   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:08.200793   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.477308303s)
	I0130 19:26:08.200820   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:08.200833   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:08.201096   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:08.201134   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:08.201191   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:08.201195   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:08.201206   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:08.201225   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:08.201238   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:08.201447   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:08.201466   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:08.248845   12691 addons.go:234] Setting addon gcp-auth=true in "addons-663262"
	I0130 19:26:08.248898   12691 host.go:66] Checking if "addons-663262" exists ...
	I0130 19:26:08.249178   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:08.249212   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:08.263454   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44773
	I0130 19:26:08.263850   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:08.264273   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:08.264288   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:08.264638   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:08.265239   12691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:26:08.265276   12691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:08.280136   12691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45035
	I0130 19:26:08.280538   12691 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:08.281057   12691 main.go:141] libmachine: Using API Version  1
	I0130 19:26:08.281082   12691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:08.281423   12691 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:08.281652   12691 main.go:141] libmachine: (addons-663262) Calling .GetState
	I0130 19:26:08.283227   12691 main.go:141] libmachine: (addons-663262) Calling .DriverName
	I0130 19:26:08.283421   12691 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0130 19:26:08.283441   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHHostname
	I0130 19:26:08.286053   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:08.286458   12691 main.go:141] libmachine: (addons-663262) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:3b:b9", ip: ""} in network mk-addons-663262: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:d6:3b:b9 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-663262 Clientid:01:52:54:00:d6:3b:b9}
	I0130 19:26:08.286476   12691 main.go:141] libmachine: (addons-663262) DBG | domain addons-663262 has defined IP address 192.168.39.252 and MAC address 52:54:00:d6:3b:b9 in network mk-addons-663262
	I0130 19:26:08.286639   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHPort
	I0130 19:26:08.286804   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHKeyPath
	I0130 19:26:08.286951   12691 main.go:141] libmachine: (addons-663262) Calling .GetSSHUsername
	I0130 19:26:08.287077   12691 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/addons-663262/id_rsa Username:docker}
	I0130 19:26:09.743141   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.755292327s)
	I0130 19:26:09.743204   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.743217   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.743221   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.738847134s)
	I0130 19:26:09.743254   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.743296   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.743358   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.657542363s)
	I0130 19:26:09.743385   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.743395   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.743421   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.420286717s)
	I0130 19:26:09.743452   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.743468   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.743484   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.743515   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.743760   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.743774   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.743769   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.149502418s)
	W0130 19:26:09.743813   12691 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0130 19:26:09.743832   12691 retry.go:31] will retry after 132.540479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0130 19:26:09.743616   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.394874995s)
	I0130 19:26:09.743853   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.095110058s)
	I0130 19:26:09.743861   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.743871   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.743875   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.743886   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.743973   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.873470438s)
	I0130 19:26:09.743988   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.743997   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.744101   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.743658   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.743669   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.744138   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.743676   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.743697   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.743696   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.744160   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.744164   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.744170   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.744174   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.744178   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.744183   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.744191   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.743720   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.744209   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.744217   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.744229   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.743783   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.743545   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.412698948s)
	I0130 19:26:09.745172   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.745183   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.745487   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.745516   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.745525   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.745535   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.745544   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.745601   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.745620   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.745628   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.745637   12691 addons.go:470] Verifying addon ingress=true in "addons-663262"
	I0130 19:26:09.749100   12691 out.go:177] * Verifying ingress addon...
	I0130 19:26:09.745851   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.745884   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.745903   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.744146   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.745921   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.745943   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.745964   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.745981   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.746176   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.747093   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.747093   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.747142   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.747164   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.751081   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.751097   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.751112   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.751085   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.751137   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.751149   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.751165   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.751167   12691 addons.go:470] Verifying addon metrics-server=true in "addons-663262"
	I0130 19:26:09.751172   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.751258   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.751125   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.751136   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.751581   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.751595   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.751599   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.751617   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.751628   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.751638   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.751651   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.751672   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.752983   12691 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-663262 service yakd-dashboard -n yakd-dashboard
	
	I0130 19:26:09.751681   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.752177   12691 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0130 19:26:09.754277   12691 addons.go:470] Verifying addon registry=true in "addons-663262"
	I0130 19:26:09.755605   12691 out.go:177] * Verifying registry addon...
	I0130 19:26:09.757703   12691 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0130 19:26:09.781890   12691 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0130 19:26:09.781913   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:09.810287   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:09.811230   12691 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0130 19:26:09.811257   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:09.822774   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.822799   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.823255   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:09.823259   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.823289   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	W0130 19:26:09.823375   12691 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0130 19:26:09.842120   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:09.842141   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:09.842383   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:09.842404   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:09.877473   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0130 19:26:10.289617   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:10.294686   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:10.571974   12691 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.28853505s)
	I0130 19:26:10.571960   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.526437271s)
	I0130 19:26:10.573749   12691 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0130 19:26:10.572135   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:10.573794   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:10.575045   12691 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0130 19:26:10.576409   12691 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0130 19:26:10.576428   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0130 19:26:10.575386   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:10.575401   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:10.576491   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:10.576515   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:10.576528   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:10.576767   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:10.576785   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:10.576796   12691 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-663262"
	I0130 19:26:10.578254   12691 out.go:177] * Verifying csi-hostpath-driver addon...
	I0130 19:26:10.580145   12691 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0130 19:26:10.638373   12691 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0130 19:26:10.638398   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0130 19:26:10.704819   12691 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0130 19:26:10.704840   12691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0130 19:26:10.777238   12691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0130 19:26:10.780829   12691 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0130 19:26:10.780847   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:10.922065   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:11.021738   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:11.134926   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:11.283744   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:11.288262   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:11.598074   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:11.794976   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:11.798261   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:12.091878   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:12.265313   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:12.265563   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:12.276093   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:12.436133   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.55860911s)
	I0130 19:26:12.436194   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:12.436215   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:12.436450   12691 main.go:141] libmachine: (addons-663262) DBG | Closing plugin on server side
	I0130 19:26:12.436497   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:12.436511   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:12.436522   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:12.436531   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:12.436745   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:12.436765   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:12.591517   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:12.775040   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:12.783006   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:13.128132   12691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.350858298s)
	I0130 19:26:13.128181   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:13.128197   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:13.128473   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:13.128494   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:13.128504   12691 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:13.128512   12691 main.go:141] libmachine: (addons-663262) Calling .Close
	I0130 19:26:13.128738   12691 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:13.128761   12691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:13.130194   12691 addons.go:470] Verifying addon gcp-auth=true in "addons-663262"
	I0130 19:26:13.131735   12691 out.go:177] * Verifying gcp-auth addon...
	I0130 19:26:13.133992   12691 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0130 19:26:13.203641   12691 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0130 19:26:13.203662   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:13.205560   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:13.280831   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:13.300708   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:13.596650   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:13.641856   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:13.765583   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:13.768765   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:14.087991   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:14.144759   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:14.270168   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:14.270168   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:14.310857   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:14.591054   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:14.637750   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:14.760395   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:14.762050   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:15.087158   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:15.138092   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:15.259305   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:15.264121   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:15.585958   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:15.637950   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:15.759111   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:15.765524   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:16.086623   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:16.139191   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:16.263945   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:16.268986   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:16.586738   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:16.638244   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:16.764360   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:16.764569   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:16.769552   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:17.086771   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:17.138489   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:17.258890   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:17.263105   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:17.590969   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:17.640381   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:17.759959   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:17.779075   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:18.087802   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:18.137951   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:18.260143   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:18.266665   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:18.586940   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:18.643054   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:18.768666   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:18.769020   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:18.780466   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:19.086143   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:19.144193   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:19.342025   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:19.344362   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:19.590264   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:19.652146   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:19.786185   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:19.787756   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:20.089014   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:20.138773   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:20.260417   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:20.263821   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:20.586487   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:20.638735   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:20.761778   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:20.763600   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:21.098226   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:21.159186   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:21.261251   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:21.262588   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:21.269507   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:21.591765   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:21.639460   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:22.030421   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:22.031377   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:22.091621   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:22.138238   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:22.265665   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:22.268523   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:22.589320   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:22.642129   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:22.769157   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:22.791162   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:23.087462   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:23.141701   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:23.268046   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:23.268679   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:23.272968   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:23.593091   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:23.637774   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:23.761568   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:23.768356   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:24.089118   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:24.156096   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:24.259664   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:24.263032   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:24.587750   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:24.638494   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:24.759659   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:24.763559   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:25.086783   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:25.138573   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:25.261733   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:25.267192   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:25.273236   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:25.586585   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:25.638461   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:25.759306   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:25.762756   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:26.091946   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:26.141111   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:26.260227   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:26.267991   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:26.586526   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:26.639277   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:26.760362   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:26.764998   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:27.086828   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:27.137804   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:27.259206   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:27.263991   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:27.587052   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:27.638265   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:27.764038   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:27.770990   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:27.776345   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:28.085642   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:28.138194   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:28.259611   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:28.262829   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:28.586662   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:28.638477   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:28.760114   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:28.762053   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:29.085626   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:29.146800   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:29.496113   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:29.496722   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:29.588307   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:29.650959   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:29.791200   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:29.793725   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:29.801417   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:30.085827   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:30.138248   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:30.260056   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:30.263120   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:30.585587   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:30.638020   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:30.759471   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:30.767796   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:31.086704   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:31.139655   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:31.418592   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:31.421373   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:31.592004   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:31.639948   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:31.762813   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:31.764418   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:32.091253   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:32.137628   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:32.262545   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:32.266232   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:32.271598   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:32.586237   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:32.638326   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:32.760182   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:32.763516   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:33.086548   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:33.138192   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:33.260443   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:33.267234   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:33.588475   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:33.639427   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:33.758616   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:33.761676   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:34.086682   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:34.138396   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:34.260144   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:34.263303   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:34.275082   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:34.586527   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:34.638300   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:34.761421   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:34.766101   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:35.086359   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:35.138310   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:35.259075   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:35.262766   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:35.586105   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:35.638994   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:35.975607   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:35.978301   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:36.086402   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:36.138335   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:36.260135   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:36.272060   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:36.277060   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:36.587164   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:36.638949   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:36.762254   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:36.763515   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:37.085760   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:37.139493   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:37.258728   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:37.262045   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:37.585723   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:37.641453   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:37.762894   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:37.765081   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:38.086886   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:38.138721   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:38.537777   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:38.543079   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:38.553458   12691 pod_ready.go:102] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:38.586564   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:38.665299   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:38.761401   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:38.763658   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:39.086908   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:39.138990   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:39.259342   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:39.263030   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:39.586607   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:39.639689   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:39.758999   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:39.762318   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:40.086477   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:40.138207   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:40.261481   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:40.265548   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:40.586342   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:40.638187   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:40.763062   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:40.767912   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:40.779093   12691 pod_ready.go:92] pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:40.779116   12691 pod_ready.go:81] duration metric: took 33.014993717s waiting for pod "coredns-5dd5756b68-r4ktd" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:40.779126   12691 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-663262" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:40.786761   12691 pod_ready.go:92] pod "etcd-addons-663262" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:40.786781   12691 pod_ready.go:81] duration metric: took 7.647604ms waiting for pod "etcd-addons-663262" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:40.786792   12691 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-663262" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:40.796255   12691 pod_ready.go:92] pod "kube-apiserver-addons-663262" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:40.796273   12691 pod_ready.go:81] duration metric: took 9.473964ms waiting for pod "kube-apiserver-addons-663262" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:40.796281   12691 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-663262" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:40.807621   12691 pod_ready.go:92] pod "kube-controller-manager-addons-663262" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:40.807641   12691 pod_ready.go:81] duration metric: took 11.353219ms waiting for pod "kube-controller-manager-addons-663262" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:40.807653   12691 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q89vm" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:40.813417   12691 pod_ready.go:92] pod "kube-proxy-q89vm" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:40.813436   12691 pod_ready.go:81] duration metric: took 5.775138ms waiting for pod "kube-proxy-q89vm" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:40.813445   12691 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-663262" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:41.085943   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:41.138607   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:41.167185   12691 pod_ready.go:92] pod "kube-scheduler-addons-663262" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:41.167205   12691 pod_ready.go:81] duration metric: took 353.753383ms waiting for pod "kube-scheduler-addons-663262" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:41.167217   12691 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:41.259670   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:41.263176   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:41.585093   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:41.638299   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:41.758905   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:41.762490   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:42.086090   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:42.138080   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:42.259386   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:42.263690   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:42.586294   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:42.638356   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:42.762392   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:42.766316   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:43.088095   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:43.139523   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:43.175417   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:43.260038   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:43.262874   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:43.599687   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:43.640343   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:43.758923   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:43.761823   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:44.087276   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:44.138215   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:44.259413   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:44.262407   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:44.586880   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:44.638574   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:44.759887   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:44.766642   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:45.085181   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:45.137705   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:45.258879   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:45.262083   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:45.588889   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:45.640008   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:45.688056   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:45.759500   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:45.762378   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:46.087807   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:46.142584   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:46.258805   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:46.266480   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:46.587698   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:46.638289   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:46.758880   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:46.761906   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:47.086310   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:47.139358   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:47.259237   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:47.262518   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:47.592042   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:47.641845   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:47.759937   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:47.762900   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:48.086669   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:48.138728   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:48.173954   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:48.258600   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:48.263317   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:48.595614   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:48.639923   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:48.759105   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:48.766847   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:49.085199   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:49.138966   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:49.260256   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:49.264042   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:49.589492   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:49.638441   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:49.759037   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:49.761846   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:50.086640   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:50.138754   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:50.174885   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:50.263238   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:50.266587   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:50.593415   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:50.638779   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:50.758870   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:50.763120   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:51.085956   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:51.138366   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:51.260558   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:51.266912   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:51.595794   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:51.638185   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:51.766960   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:51.767692   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:52.087239   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:52.138286   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:52.262786   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:52.270119   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:52.587026   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:52.638196   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:52.676448   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:52.765945   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:52.766716   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:53.086373   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:53.138577   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:53.260554   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:53.262742   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:53.587787   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:53.638752   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:53.759553   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:53.762876   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:54.087037   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:54.138167   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:54.258513   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:54.262179   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:54.586617   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:54.638400   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:54.759808   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:54.763308   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:55.086988   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:55.138184   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:55.175903   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:55.261449   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:55.263514   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:55.586040   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:55.637880   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:55.763190   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:55.763893   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:56.087149   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:56.138745   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:56.259659   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:56.284229   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:56.587570   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:56.638568   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:56.760197   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:56.762240   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:57.086059   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:57.137506   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:57.261695   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:57.263008   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:57.586236   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:57.638410   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:57.675317   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:57.759282   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:57.766470   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:58.088389   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:58.138409   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:58.258598   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:58.262214   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:58.586497   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:58.638836   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:58.759507   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:58.762489   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:59.086241   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:59.137697   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:59.259421   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:59.264350   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:59.586229   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:59.637858   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:59.761441   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:59.763786   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:00.087247   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:00.137885   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:00.174541   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:00.259063   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:00.262071   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:00.586506   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:00.638025   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:00.759999   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:00.763064   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:01.087017   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:01.137814   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:01.258905   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:01.261931   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:01.587064   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:01.639180   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:01.758953   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:01.762248   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:02.085701   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:02.138493   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:02.260059   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:02.266637   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:02.586058   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:02.638220   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:02.674332   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:02.759987   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:02.764140   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:03.086646   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:03.138688   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:03.262342   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:03.264116   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:03.586978   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:03.637739   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:03.759749   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:03.762707   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:04.086291   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:04.138374   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:04.260571   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:04.264587   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:04.586992   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:04.638443   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:04.759995   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:04.762875   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:05.085986   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:05.137680   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:05.173708   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:05.262738   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:05.264424   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:05.585013   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:05.641699   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:05.759023   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:05.762259   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:06.085654   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:06.138790   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:06.261238   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:06.263421   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:06.586236   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:06.637853   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:06.759497   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:06.763247   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:07.086288   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:07.140695   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:07.175244   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:07.260698   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:07.262715   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:07.586981   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:07.639167   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:07.759232   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:07.762452   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:08.087428   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:08.138612   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:08.261283   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:08.262975   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:08.586412   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:08.637921   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:08.761232   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:08.764590   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:09.086024   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:09.140245   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:09.176327   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:09.259630   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:09.265854   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:09.586005   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:09.640772   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:09.759782   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:09.770510   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:10.087436   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:10.139042   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:10.259859   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:10.263443   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:10.586143   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:10.638468   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:10.759001   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:10.762445   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:11.085569   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:11.138274   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:11.261040   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:11.269621   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:11.587223   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:11.638562   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:11.680172   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:11.763943   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:11.766157   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:12.086144   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:12.137697   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:12.262084   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:12.263966   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:12.587141   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:12.639244   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:12.760092   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:12.766935   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:13.086486   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:13.138269   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:13.262331   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:13.262418   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:13.586872   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:13.638761   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:13.758345   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:13.762285   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:14.087496   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:14.137995   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:14.173995   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:14.259513   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:14.263173   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:14.587404   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:14.637982   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:14.759176   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:14.761937   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:15.088778   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:15.137457   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:15.264388   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:15.264556   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:15.586241   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:15.637192   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:15.762340   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:15.765718   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:16.085545   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:16.138981   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:16.181727   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:16.259768   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:16.265046   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:16.586091   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:16.638237   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:16.851909   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:16.855395   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:17.086177   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:17.138999   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:17.261142   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:17.262598   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:17.588437   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:17.643526   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:17.759740   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:17.764950   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:18.086291   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:18.145923   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:18.258327   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:18.262755   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:18.586608   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:18.638502   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:18.674444   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:18.760191   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:18.767993   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:19.086144   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:19.138462   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:19.266239   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:19.268008   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:19.589840   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:19.639254   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:20.142176   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:20.142423   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:20.146773   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:20.147502   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:20.259549   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:20.263248   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:20.585377   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:20.638253   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:20.676375   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:20.759706   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:20.768078   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:21.086557   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:21.138680   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:21.259581   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:21.265858   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:21.586073   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:21.637613   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:21.759330   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:21.762566   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:22.085378   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:22.138311   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:22.258867   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:22.261891   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:22.586347   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:22.638338   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:22.760790   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:22.762039   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:23.085604   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:23.139331   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:23.173597   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:23.259673   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:23.262518   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:23.585498   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:23.638516   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:23.760095   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:23.763011   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:24.086819   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:24.138885   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:24.259281   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:24.262306   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:24.586459   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:24.639029   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:24.761382   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:24.767033   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:25.085886   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:25.137397   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:25.173878   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:25.259967   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:25.263124   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:25.586548   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:25.639135   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:25.759116   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:25.762378   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:26.085674   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:26.138342   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:26.261704   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:26.265542   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:26.586108   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:26.638074   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:26.759756   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:26.762500   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:27.087184   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:27.137715   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:27.260031   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:27.263040   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:27.587330   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:27.637753   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:27.677530   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:27.759847   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:27.762757   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:28.086003   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:28.137818   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:28.260210   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:28.262620   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:28.586244   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:28.638446   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:28.761385   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:28.763622   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:29.085638   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:29.138398   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:29.261504   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:29.268009   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:29.587531   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:29.639074   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:29.760962   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:29.768607   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:30.086357   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:30.137667   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:30.175073   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:30.260165   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:30.264054   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:30.586351   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:30.638500   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:30.760050   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:30.763227   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:31.087112   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:31.138118   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:31.261298   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:31.268258   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:31.586741   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:31.639849   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:31.763387   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:31.763477   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:32.086822   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:32.138904   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:32.259236   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:32.262238   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:32.585426   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:32.638395   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:32.674561   12691 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"False"
	I0130 19:27:32.759566   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:32.763516   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:33.086673   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:33.137484   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:33.259278   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:33.267554   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:33.589624   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:33.647777   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:33.759146   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:33.765352   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:34.087141   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:34.138198   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:34.174696   12691 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace has status "Ready":"True"
	I0130 19:27:34.174718   12691 pod_ready.go:81] duration metric: took 53.007495707s waiting for pod "nvidia-device-plugin-daemonset-wfrjk" in "kube-system" namespace to be "Ready" ...
	I0130 19:27:34.174728   12691 pod_ready.go:38] duration metric: took 1m27.250130263s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 19:27:34.174741   12691 api_server.go:52] waiting for apiserver process to appear ...
	I0130 19:27:34.174767   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 19:27:34.174820   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 19:27:34.236746   12691 cri.go:89] found id: "f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f"
	I0130 19:27:34.236765   12691 cri.go:89] found id: ""
	I0130 19:27:34.236772   12691 logs.go:276] 1 containers: [f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f]
	I0130 19:27:34.236814   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:34.244671   12691 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 19:27:34.244732   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 19:27:34.259673   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:34.265961   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:34.303214   12691 cri.go:89] found id: "ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99"
	I0130 19:27:34.303231   12691 cri.go:89] found id: ""
	I0130 19:27:34.303237   12691 logs.go:276] 1 containers: [ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99]
	I0130 19:27:34.303297   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:34.307071   12691 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 19:27:34.307125   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 19:27:34.354031   12691 cri.go:89] found id: "3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee"
	I0130 19:27:34.354053   12691 cri.go:89] found id: ""
	I0130 19:27:34.354060   12691 logs.go:276] 1 containers: [3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee]
	I0130 19:27:34.354111   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:34.357981   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 19:27:34.358043   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 19:27:34.397544   12691 cri.go:89] found id: "a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613"
	I0130 19:27:34.397564   12691 cri.go:89] found id: ""
	I0130 19:27:34.397570   12691 logs.go:276] 1 containers: [a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613]
	I0130 19:27:34.397613   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:34.402196   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 19:27:34.402253   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 19:27:34.439182   12691 cri.go:89] found id: "1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce"
	I0130 19:27:34.439208   12691 cri.go:89] found id: ""
	I0130 19:27:34.439215   12691 logs.go:276] 1 containers: [1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce]
	I0130 19:27:34.439278   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:34.443222   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 19:27:34.443299   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 19:27:34.484278   12691 cri.go:89] found id: "00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f"
	I0130 19:27:34.484297   12691 cri.go:89] found id: ""
	I0130 19:27:34.484306   12691 logs.go:276] 1 containers: [00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f]
	I0130 19:27:34.484360   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:34.488967   12691 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 19:27:34.489036   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 19:27:34.534594   12691 cri.go:89] found id: ""
	I0130 19:27:34.534614   12691 logs.go:276] 0 containers: []
	W0130 19:27:34.534621   12691 logs.go:278] No container was found matching "kindnet"
	I0130 19:27:34.534628   12691 logs.go:123] Gathering logs for CRI-O ...
	I0130 19:27:34.534639   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 19:27:34.586552   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:34.637403   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:34.760623   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:34.765241   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:34.997946   12691 logs.go:123] Gathering logs for container status ...
	I0130 19:27:34.997977   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 19:27:35.074052   12691 logs.go:123] Gathering logs for kubelet ...
	I0130 19:27:35.074079   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 19:27:35.086952   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0130 19:27:35.134349   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.398729    1255 reflector.go:535] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:35.134562   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.398789    1255 reflector.go:147] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:35.137650   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.832731    1255 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	W0130 19:27:35.137913   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.832784    1255 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	I0130 19:27:35.138704   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:35.161556   12691 logs.go:123] Gathering logs for describe nodes ...
	I0130 19:27:35.161625   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 19:27:35.260954   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:35.264485   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:35.340714   12691 logs.go:123] Gathering logs for etcd [ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99] ...
	I0130 19:27:35.340741   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99"
	I0130 19:27:35.415344   12691 logs.go:123] Gathering logs for kube-scheduler [a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613] ...
	I0130 19:27:35.415379   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613"
	I0130 19:27:35.495286   12691 logs.go:123] Gathering logs for kube-proxy [1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce] ...
	I0130 19:27:35.495320   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce"
	I0130 19:27:35.534142   12691 logs.go:123] Gathering logs for kube-controller-manager [00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f] ...
	I0130 19:27:35.534169   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f"
	I0130 19:27:35.587186   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:35.599675   12691 logs.go:123] Gathering logs for dmesg ...
	I0130 19:27:35.599708   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 19:27:35.613134   12691 logs.go:123] Gathering logs for kube-apiserver [f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f] ...
	I0130 19:27:35.613159   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f"
	I0130 19:27:35.637778   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:35.667630   12691 logs.go:123] Gathering logs for coredns [3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee] ...
	I0130 19:27:35.667659   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee"
	I0130 19:27:35.727577   12691 out.go:309] Setting ErrFile to fd 2...
	I0130 19:27:35.727598   12691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 19:27:35.727642   12691 out.go:239] X Problems detected in kubelet:
	W0130 19:27:35.727651   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.398729    1255 reflector.go:535] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:35.727658   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.398789    1255 reflector.go:147] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:35.727667   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.832731    1255 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	W0130 19:27:35.727676   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.832784    1255 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	I0130 19:27:35.727681   12691 out.go:309] Setting ErrFile to fd 2...
	I0130 19:27:35.727688   12691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:27:35.762419   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:35.766584   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:36.088825   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:36.137798   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:36.259659   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:36.263076   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:36.588066   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:36.638281   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:36.761056   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:36.763482   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:37.087091   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:37.152286   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:37.259805   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:37.268262   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:37.592105   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:37.637787   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:37.760145   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:37.762954   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:38.093160   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:38.138272   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:38.259842   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:38.263199   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:38.586099   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:38.638392   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:39.034771   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:39.036674   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:39.085992   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:39.137792   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:39.263418   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:39.265957   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:39.586405   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:39.638740   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:39.759826   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:39.764794   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:40.088595   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:40.138956   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:40.264907   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:40.289760   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:40.586476   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:40.638326   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:40.760521   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:40.765797   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:41.085929   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:41.138698   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:41.259322   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:41.264306   12691 kapi.go:107] duration metric: took 1m31.506598635s to wait for kubernetes.io/minikube-addons=registry ...
	I0130 19:27:41.587970   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:41.638720   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:41.760778   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:42.086412   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:42.138450   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:42.259517   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:42.587421   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:42.641479   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:42.759962   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:43.099823   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:43.155530   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:43.554883   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:43.623682   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:43.646617   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:43.782427   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:44.087017   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:44.141947   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:44.263284   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:44.586434   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:44.638571   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:44.760788   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:45.089073   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:45.138205   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:45.260147   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:45.587899   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:45.640565   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:45.729656   12691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 19:27:45.757944   12691 api_server.go:72] duration metric: took 1m45.177638236s to wait for apiserver process to appear ...
	I0130 19:27:45.757967   12691 api_server.go:88] waiting for apiserver healthz status ...
	I0130 19:27:45.758004   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 19:27:45.758056   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 19:27:45.760022   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:45.819755   12691 cri.go:89] found id: "f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f"
	I0130 19:27:45.819779   12691 cri.go:89] found id: ""
	I0130 19:27:45.819789   12691 logs.go:276] 1 containers: [f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f]
	I0130 19:27:45.819843   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:45.824194   12691 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 19:27:45.824262   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 19:27:45.882016   12691 cri.go:89] found id: "ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99"
	I0130 19:27:45.882032   12691 cri.go:89] found id: ""
	I0130 19:27:45.882039   12691 logs.go:276] 1 containers: [ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99]
	I0130 19:27:45.882080   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:45.891524   12691 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 19:27:45.891592   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 19:27:45.973504   12691 cri.go:89] found id: "3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee"
	I0130 19:27:45.973524   12691 cri.go:89] found id: ""
	I0130 19:27:45.973532   12691 logs.go:276] 1 containers: [3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee]
	I0130 19:27:45.973582   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:45.979178   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 19:27:45.979241   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 19:27:46.033259   12691 cri.go:89] found id: "a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613"
	I0130 19:27:46.033288   12691 cri.go:89] found id: ""
	I0130 19:27:46.033298   12691 logs.go:276] 1 containers: [a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613]
	I0130 19:27:46.033356   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:46.040720   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 19:27:46.040780   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 19:27:46.087153   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:46.096773   12691 cri.go:89] found id: "1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce"
	I0130 19:27:46.096787   12691 cri.go:89] found id: ""
	I0130 19:27:46.096796   12691 logs.go:276] 1 containers: [1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce]
	I0130 19:27:46.096837   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:46.102908   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 19:27:46.102974   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 19:27:46.137851   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:46.164277   12691 cri.go:89] found id: "00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f"
	I0130 19:27:46.164297   12691 cri.go:89] found id: ""
	I0130 19:27:46.164304   12691 logs.go:276] 1 containers: [00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f]
	I0130 19:27:46.164354   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:46.175884   12691 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 19:27:46.175939   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 19:27:46.234324   12691 cri.go:89] found id: ""
	I0130 19:27:46.234349   12691 logs.go:276] 0 containers: []
	W0130 19:27:46.234357   12691 logs.go:278] No container was found matching "kindnet"
	I0130 19:27:46.234365   12691 logs.go:123] Gathering logs for describe nodes ...
	I0130 19:27:46.234377   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 19:27:46.259535   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:46.476750   12691 logs.go:123] Gathering logs for kube-apiserver [f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f] ...
	I0130 19:27:46.476783   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f"
	I0130 19:27:46.588407   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:46.589632   12691 logs.go:123] Gathering logs for etcd [ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99] ...
	I0130 19:27:46.589657   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99"
	I0130 19:27:46.638712   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:46.757743   12691 logs.go:123] Gathering logs for coredns [3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee] ...
	I0130 19:27:46.757773   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee"
	I0130 19:27:46.759956   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:46.869309   12691 logs.go:123] Gathering logs for kube-controller-manager [00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f] ...
	I0130 19:27:46.869343   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f"
	I0130 19:27:46.963321   12691 logs.go:123] Gathering logs for CRI-O ...
	I0130 19:27:46.963365   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 19:27:47.086107   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:47.138354   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:47.259778   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:47.493279   12691 logs.go:123] Gathering logs for container status ...
	I0130 19:27:47.493313   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 19:27:47.552775   12691 logs.go:123] Gathering logs for kubelet ...
	I0130 19:27:47.552801   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 19:27:47.591732   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0130 19:27:47.620527   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.398729    1255 reflector.go:535] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:47.620723   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.398789    1255 reflector.go:147] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:47.622724   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.832731    1255 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	W0130 19:27:47.622903   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.832784    1255 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	I0130 19:27:47.642887   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:47.648692   12691 logs.go:123] Gathering logs for dmesg ...
	I0130 19:27:47.648731   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 19:27:47.709111   12691 logs.go:123] Gathering logs for kube-scheduler [a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613] ...
	I0130 19:27:47.709147   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613"
	I0130 19:27:47.760653   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:47.815539   12691 logs.go:123] Gathering logs for kube-proxy [1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce] ...
	I0130 19:27:47.815565   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce"
	I0130 19:27:47.880597   12691 out.go:309] Setting ErrFile to fd 2...
	I0130 19:27:47.880629   12691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 19:27:47.880690   12691 out.go:239] X Problems detected in kubelet:
	W0130 19:27:47.880704   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.398729    1255 reflector.go:535] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:47.880713   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.398789    1255 reflector.go:147] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:47.880728   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.832731    1255 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	W0130 19:27:47.880736   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.832784    1255 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	I0130 19:27:47.880744   12691 out.go:309] Setting ErrFile to fd 2...
	I0130 19:27:47.880752   12691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:27:48.086762   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:48.139151   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:48.260139   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:48.586287   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:48.638376   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:48.759960   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:49.085948   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:49.138397   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:49.538784   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:49.587527   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:49.639511   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:49.762276   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:50.085944   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:50.138209   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:50.263665   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:50.587128   12691 kapi.go:107] duration metric: took 1m40.006982042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0130 19:27:50.638281   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:50.760358   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:51.138495   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:51.259709   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:51.638584   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:51.759995   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:52.138004   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:52.263591   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:52.641503   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:52.759061   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:53.138291   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:53.259877   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:53.637805   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:53.759321   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:54.138891   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:54.259701   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:54.638606   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:54.772985   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:55.138330   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:55.260641   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:55.638619   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:55.760298   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:56.139990   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:56.259901   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:56.638422   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:56.760597   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:57.137870   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:57.259626   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:57.639589   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:57.759153   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:57.881765   12691 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0130 19:27:57.886813   12691 api_server.go:279] https://192.168.39.252:8443/healthz returned 200:
	ok
	I0130 19:27:57.888041   12691 api_server.go:141] control plane version: v1.28.4
	I0130 19:27:57.888060   12691 api_server.go:131] duration metric: took 12.130086426s to wait for apiserver health ...
	I0130 19:27:57.888067   12691 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 19:27:57.888086   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 19:27:57.888128   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 19:27:57.946874   12691 cri.go:89] found id: "f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f"
	I0130 19:27:57.946898   12691 cri.go:89] found id: ""
	I0130 19:27:57.946907   12691 logs.go:276] 1 containers: [f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f]
	I0130 19:27:57.946951   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:57.952275   12691 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 19:27:57.952329   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 19:27:57.996837   12691 cri.go:89] found id: "ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99"
	I0130 19:27:57.996859   12691 cri.go:89] found id: ""
	I0130 19:27:57.996869   12691 logs.go:276] 1 containers: [ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99]
	I0130 19:27:57.996912   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:58.001473   12691 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 19:27:58.001529   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 19:27:58.048838   12691 cri.go:89] found id: "3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee"
	I0130 19:27:58.048866   12691 cri.go:89] found id: ""
	I0130 19:27:58.048873   12691 logs.go:276] 1 containers: [3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee]
	I0130 19:27:58.048914   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:58.053224   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 19:27:58.053290   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 19:27:58.104223   12691 cri.go:89] found id: "a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613"
	I0130 19:27:58.104246   12691 cri.go:89] found id: ""
	I0130 19:27:58.104255   12691 logs.go:276] 1 containers: [a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613]
	I0130 19:27:58.104311   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:58.109003   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 19:27:58.109071   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 19:27:58.138646   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:58.155571   12691 cri.go:89] found id: "1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce"
	I0130 19:27:58.155599   12691 cri.go:89] found id: ""
	I0130 19:27:58.155609   12691 logs.go:276] 1 containers: [1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce]
	I0130 19:27:58.155663   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:58.160138   12691 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 19:27:58.160207   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 19:27:58.212117   12691 cri.go:89] found id: "00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f"
	I0130 19:27:58.212135   12691 cri.go:89] found id: ""
	I0130 19:27:58.212142   12691 logs.go:276] 1 containers: [00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f]
	I0130 19:27:58.212189   12691 ssh_runner.go:195] Run: which crictl
	I0130 19:27:58.216264   12691 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 19:27:58.216306   12691 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 19:27:58.257190   12691 cri.go:89] found id: ""
	I0130 19:27:58.257217   12691 logs.go:276] 0 containers: []
	W0130 19:27:58.257226   12691 logs.go:278] No container was found matching "kindnet"
	I0130 19:27:58.257235   12691 logs.go:123] Gathering logs for container status ...
	I0130 19:27:58.257247   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 19:27:58.258862   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:58.319420   12691 logs.go:123] Gathering logs for dmesg ...
	I0130 19:27:58.319445   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 19:27:58.338256   12691 logs.go:123] Gathering logs for kube-scheduler [a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613] ...
	I0130 19:27:58.338280   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613"
	I0130 19:27:58.388195   12691 logs.go:123] Gathering logs for kube-proxy [1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce] ...
	I0130 19:27:58.388226   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce"
	I0130 19:27:58.434032   12691 logs.go:123] Gathering logs for kube-controller-manager [00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f] ...
	I0130 19:27:58.434057   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f"
	I0130 19:27:58.496295   12691 logs.go:123] Gathering logs for CRI-O ...
	I0130 19:27:58.496327   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 19:27:58.638241   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:58.759790   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:59.139154   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:59.146244   12691 logs.go:123] Gathering logs for kubelet ...
	I0130 19:27:59.146277   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 19:27:59.197439   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.398729    1255 reflector.go:535] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:59.197607   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.398789    1255 reflector.go:147] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:59.199525   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.832731    1255 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	W0130 19:27:59.199670   12691 logs.go:138] Found kubelet problem: Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.832784    1255 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	I0130 19:27:59.224539   12691 logs.go:123] Gathering logs for describe nodes ...
	I0130 19:27:59.224566   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 19:27:59.259413   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:59.384725   12691 logs.go:123] Gathering logs for kube-apiserver [f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f] ...
	I0130 19:27:59.384761   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f"
	I0130 19:27:59.439918   12691 logs.go:123] Gathering logs for etcd [ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99] ...
	I0130 19:27:59.439959   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99"
	I0130 19:27:59.510360   12691 logs.go:123] Gathering logs for coredns [3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee] ...
	I0130 19:27:59.510395   12691 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee"
	I0130 19:27:59.549293   12691 out.go:309] Setting ErrFile to fd 2...
	I0130 19:27:59.549319   12691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 19:27:59.549374   12691 out.go:239] X Problems detected in kubelet:
	W0130 19:27:59.549384   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.398729    1255 reflector.go:535] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:59.549393   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.398789    1255 reflector.go:147] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-663262' and this object
	W0130 19:27:59.549403   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: W0130 19:26:08.832731    1255 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	W0130 19:27:59.549414   12691 out.go:239]   Jan 30 19:26:08 addons-663262 kubelet[1255]: E0130 19:26:08.832784    1255 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663262" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-663262' and this object
	I0130 19:27:59.549422   12691 out.go:309] Setting ErrFile to fd 2...
	I0130 19:27:59.549435   12691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:27:59.638901   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:59.759848   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:00.138636   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:00.260590   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:00.637735   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:00.759633   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:01.138450   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:01.259540   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:01.638753   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:01.759830   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:02.138410   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:02.259571   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:02.662241   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:02.762147   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:03.138739   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:03.259014   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:03.646860   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:03.760987   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:04.139217   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:04.259997   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:04.638748   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:04.759237   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:05.138230   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:05.259743   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:05.638165   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:05.762090   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:06.138654   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:06.259229   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:06.639079   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:06.761376   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:07.138392   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:07.258964   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:07.638618   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:07.759291   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:08.139636   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:08.258762   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:08.644349   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:08.761228   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:09.138716   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:09.259029   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:09.561711   12691 system_pods.go:59] 18 kube-system pods found
	I0130 19:28:09.561743   12691 system_pods.go:61] "coredns-5dd5756b68-r4ktd" [085b2aab-fb8d-4ef5-835f-da54ed4b6ad4] Running
	I0130 19:28:09.561748   12691 system_pods.go:61] "csi-hostpath-attacher-0" [c94d98ee-92e9-4c31-a29b-97f4764327b1] Running
	I0130 19:28:09.561752   12691 system_pods.go:61] "csi-hostpath-resizer-0" [73d6810d-15b6-44f5-bea4-451ed50846e9] Running
	I0130 19:28:09.561757   12691 system_pods.go:61] "csi-hostpathplugin-rl8t7" [a6749615-fe63-4919-92ba-16122bdc9608] Running
	I0130 19:28:09.561768   12691 system_pods.go:61] "etcd-addons-663262" [57ca6584-5d02-4a08-89fd-bc7c617dba77] Running
	I0130 19:28:09.561772   12691 system_pods.go:61] "kube-apiserver-addons-663262" [3644af6d-a37e-48bb-9358-fc21600b2ea5] Running
	I0130 19:28:09.561777   12691 system_pods.go:61] "kube-controller-manager-addons-663262" [ffbfa446-75a5-4f0e-b8e1-f48d78e0c294] Running
	I0130 19:28:09.561781   12691 system_pods.go:61] "kube-ingress-dns-minikube" [1d47e36b-b93a-4940-bd3f-de7a05ece7ed] Running
	I0130 19:28:09.561785   12691 system_pods.go:61] "kube-proxy-q89vm" [d2ab0e3f-c53b-4ef2-9e79-925c916daccb] Running
	I0130 19:28:09.561792   12691 system_pods.go:61] "kube-scheduler-addons-663262" [4a18aba6-3384-489e-a7fb-a78a2672a2ed] Running
	I0130 19:28:09.561796   12691 system_pods.go:61] "metrics-server-7c66d45ddc-nxh8w" [3c2117ed-7cab-4d9a-8960-57004b317d18] Running
	I0130 19:28:09.561800   12691 system_pods.go:61] "nvidia-device-plugin-daemonset-wfrjk" [fad394cb-bffb-41c2-825e-f94efd52f7c8] Running
	I0130 19:28:09.561805   12691 system_pods.go:61] "registry-proxy-n8wz9" [fff0fc97-43df-44b0-b675-f7fab6617f6b] Running
	I0130 19:28:09.561809   12691 system_pods.go:61] "registry-w2wdf" [cbacb56e-d023-4053-959e-f949629b5e23] Running
	I0130 19:28:09.561816   12691 system_pods.go:61] "snapshot-controller-58dbcc7b99-f92tj" [fbe5f4f2-8c33-4c97-a56f-7668ff2f0588] Running
	I0130 19:28:09.561819   12691 system_pods.go:61] "snapshot-controller-58dbcc7b99-vgb8x" [4ec6e124-e734-44d5-a465-9381aa10b656] Running
	I0130 19:28:09.561823   12691 system_pods.go:61] "storage-provisioner" [fd31ea74-8db4-466e-b08c-b4f0a078bd04] Running
	I0130 19:28:09.561828   12691 system_pods.go:61] "tiller-deploy-7b677967b9-dffh2" [ea9293b4-e84a-4770-8323-32899c9e383c] Running
	I0130 19:28:09.561836   12691 system_pods.go:74] duration metric: took 11.673761745s to wait for pod list to return data ...
	I0130 19:28:09.561849   12691 default_sa.go:34] waiting for default service account to be created ...
	I0130 19:28:09.564391   12691 default_sa.go:45] found service account: "default"
	I0130 19:28:09.564409   12691 default_sa.go:55] duration metric: took 2.55475ms for default service account to be created ...
	I0130 19:28:09.564416   12691 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 19:28:09.575530   12691 system_pods.go:86] 18 kube-system pods found
	I0130 19:28:09.575553   12691 system_pods.go:89] "coredns-5dd5756b68-r4ktd" [085b2aab-fb8d-4ef5-835f-da54ed4b6ad4] Running
	I0130 19:28:09.575559   12691 system_pods.go:89] "csi-hostpath-attacher-0" [c94d98ee-92e9-4c31-a29b-97f4764327b1] Running
	I0130 19:28:09.575563   12691 system_pods.go:89] "csi-hostpath-resizer-0" [73d6810d-15b6-44f5-bea4-451ed50846e9] Running
	I0130 19:28:09.575567   12691 system_pods.go:89] "csi-hostpathplugin-rl8t7" [a6749615-fe63-4919-92ba-16122bdc9608] Running
	I0130 19:28:09.575571   12691 system_pods.go:89] "etcd-addons-663262" [57ca6584-5d02-4a08-89fd-bc7c617dba77] Running
	I0130 19:28:09.575575   12691 system_pods.go:89] "kube-apiserver-addons-663262" [3644af6d-a37e-48bb-9358-fc21600b2ea5] Running
	I0130 19:28:09.575582   12691 system_pods.go:89] "kube-controller-manager-addons-663262" [ffbfa446-75a5-4f0e-b8e1-f48d78e0c294] Running
	I0130 19:28:09.575586   12691 system_pods.go:89] "kube-ingress-dns-minikube" [1d47e36b-b93a-4940-bd3f-de7a05ece7ed] Running
	I0130 19:28:09.575590   12691 system_pods.go:89] "kube-proxy-q89vm" [d2ab0e3f-c53b-4ef2-9e79-925c916daccb] Running
	I0130 19:28:09.575598   12691 system_pods.go:89] "kube-scheduler-addons-663262" [4a18aba6-3384-489e-a7fb-a78a2672a2ed] Running
	I0130 19:28:09.575604   12691 system_pods.go:89] "metrics-server-7c66d45ddc-nxh8w" [3c2117ed-7cab-4d9a-8960-57004b317d18] Running
	I0130 19:28:09.575609   12691 system_pods.go:89] "nvidia-device-plugin-daemonset-wfrjk" [fad394cb-bffb-41c2-825e-f94efd52f7c8] Running
	I0130 19:28:09.575615   12691 system_pods.go:89] "registry-proxy-n8wz9" [fff0fc97-43df-44b0-b675-f7fab6617f6b] Running
	I0130 19:28:09.575619   12691 system_pods.go:89] "registry-w2wdf" [cbacb56e-d023-4053-959e-f949629b5e23] Running
	I0130 19:28:09.575626   12691 system_pods.go:89] "snapshot-controller-58dbcc7b99-f92tj" [fbe5f4f2-8c33-4c97-a56f-7668ff2f0588] Running
	I0130 19:28:09.575630   12691 system_pods.go:89] "snapshot-controller-58dbcc7b99-vgb8x" [4ec6e124-e734-44d5-a465-9381aa10b656] Running
	I0130 19:28:09.575636   12691 system_pods.go:89] "storage-provisioner" [fd31ea74-8db4-466e-b08c-b4f0a078bd04] Running
	I0130 19:28:09.575640   12691 system_pods.go:89] "tiller-deploy-7b677967b9-dffh2" [ea9293b4-e84a-4770-8323-32899c9e383c] Running
	I0130 19:28:09.575647   12691 system_pods.go:126] duration metric: took 11.226338ms to wait for k8s-apps to be running ...
	I0130 19:28:09.575655   12691 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 19:28:09.575698   12691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 19:28:09.592856   12691 system_svc.go:56] duration metric: took 17.192339ms WaitForService to wait for kubelet.
	I0130 19:28:09.592880   12691 kubeadm.go:581] duration metric: took 2m9.012584722s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 19:28:09.592903   12691 node_conditions.go:102] verifying NodePressure condition ...
	I0130 19:28:09.596319   12691 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 19:28:09.596347   12691 node_conditions.go:123] node cpu capacity is 2
	I0130 19:28:09.596362   12691 node_conditions.go:105] duration metric: took 3.439444ms to run NodePressure ...
	I0130 19:28:09.596375   12691 start.go:228] waiting for startup goroutines ...
	I0130 19:28:09.637649   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:09.758900   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:10.138944   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:10.259924   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:10.638675   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:10.759511   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:11.141544   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:11.258974   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:11.638310   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:11.762082   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:12.140897   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:12.259454   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:12.639558   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:12.758511   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:13.142048   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:13.259553   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:13.638908   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:13.759584   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:14.138049   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:14.259713   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:14.638211   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:14.760766   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:15.141588   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:15.259176   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:15.638901   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:15.759318   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:16.138474   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:16.260953   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:16.639058   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:16.760678   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:17.138844   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:17.260679   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:17.638824   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:17.760353   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:18.139079   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:18.259418   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:18.639715   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:18.759048   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:19.138223   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:19.264882   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:19.639833   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:19.800492   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:20.141087   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:20.259630   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:20.639305   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:20.760781   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:21.139863   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:21.260184   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:21.639370   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:21.760607   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:22.138888   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:22.260102   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:22.639040   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:22.759403   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:23.138090   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:23.258793   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:23.639116   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:23.759862   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:24.138872   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:24.259200   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:24.638155   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:24.759937   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:25.141692   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:25.259018   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:25.638374   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:25.760986   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:26.138510   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:26.259836   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:26.638620   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:26.764101   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:27.143081   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:27.259412   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:27.637693   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:27.759611   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:28.137758   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:28.274678   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:28.638145   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:28.759300   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:29.138203   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:29.259500   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:29.642402   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:29.762049   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:30.137439   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:30.259083   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:30.638170   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:30.761638   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:31.142964   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:31.260622   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:31.639151   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:31.760253   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:32.138560   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:32.261364   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:32.638874   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:32.759122   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:33.138446   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:33.259725   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:33.638044   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:33.759951   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:34.138645   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:34.258965   12691 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:34.638494   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:34.758753   12691 kapi.go:107] duration metric: took 2m25.006578821s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0130 19:28:35.138410   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:35.638168   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:36.139465   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:36.640793   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:37.138617   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:37.641168   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:38.316442   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:38.637977   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:39.138631   12691 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:39.638855   12691 kapi.go:107] duration metric: took 2m26.504863723s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0130 19:28:39.640571   12691 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-663262 cluster.
	I0130 19:28:39.641844   12691 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0130 19:28:39.642910   12691 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0130 19:28:39.644007   12691 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, metrics-server, helm-tiller, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0130 19:28:39.645196   12691 addons.go:505] enable addons completed in 2m39.616768626s: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner metrics-server helm-tiller inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0130 19:28:39.645228   12691 start.go:233] waiting for cluster config update ...
	I0130 19:28:39.645241   12691 start.go:242] writing updated cluster config ...
	I0130 19:28:39.645447   12691 ssh_runner.go:195] Run: rm -f paused
	I0130 19:28:39.695824   12691 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 19:28:39.697768   12691 out.go:177] * Done! kubectl is now configured to use "addons-663262" cluster and "default" namespace by default
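Note: the gcp-auth output above mentions opting a pod out of credential mounting with the `gcp-auth-skip-secret` label key. As a minimal sketch only (the label value "true", the pod name, and the image are assumptions added for illustration and are not taken from this run), such a pod could be declared like this:

    # Hypothetical pod manifest; the gcp-auth-skip-secret label is the only relevant part.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                  # illustrative name, not from this test run
      labels:
        gcp-auth-skip-secret: "true"      # assumed value; signals the gcp-auth webhook to skip mounting credentials
    spec:
      containers:
      - name: app
        image: gcr.io/google-samples/hello-app:1.0   # placeholder image for the sketch

If this were applied with `kubectl apply -f`, the resulting pod spec should not carry the injected GCP credential mount, per the addon message above.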
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 19:25:16 UTC, ends at Tue 2024-01-30 19:31:33 UTC. --
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.856057176Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706643092856040499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575989,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=bd64eb31-f89c-4053-9aa9-0c9815aa1fdc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.856856927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4f4234a6-a812-4e14-9d9f-8dc4305b06a4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.856929930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4f4234a6-a812-4e14-9d9f-8dc4305b06a4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.857270673Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12626938ea3cb69f1f2ae294a1b6cd2f50455e95c8762bfed15c887cb0a8cc46,PodSandboxId:db05d2e3075ed02809f14a31bd11bad9f1240baa0a8f7fae7bbb6c2ff9fc4c86,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706643084880415746,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-xbxd2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4fc9505d-88a3-4bf7-a36b-9c82aafc6767,},Annotations:map[string]string{io.kubernetes.container.hash: 727a31b9,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61706464601edec1e3133bb5389ec01aa2ab6f406389b45dc4f830328b817b57,PodSandboxId:026ca814a4e3dcfb8193f56f591a0afa7ebfb29632250a24cd7c750e86db8b38,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706642943261810278,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18353055-d3bc-4d56-9040-5d238a7d772c,},Annotations:map[string]string{io.kubernet
es.container.hash: ee558c6d,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732e17c8bd5baaff5a4eb2c25057609fa27dac0120fef14c2bdce0ed851280b6,PodSandboxId:169b89f6ba5538748fc08b433dc90e3269d394d9e7e7ffbf53496ddbb0b8f6c2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706642930349757651,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-n64s5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 2fcc34b1-9bbf-4735-9978-febdfce4af37,},Annotations:map[string]string{io.kubernetes.container.hash: c6a45ebc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2f49b1f3a9b988906ad8f5c59141832c8bd608908754d80bd7c0db4c6b89d,PodSandboxId:87bca31ac7550c0fc927a8b48dea2c6be93325f61083f46f25088f0b785e2fec,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706642918409876691,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-h69t7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 044d901c-78ca-47f7-bc29-c9e52cc18d8e,},Annotations:map[string]string{io.kubernetes.container.hash: cfc2e5de,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a70160dcdffe4b4c365298032bfc21e616aabe0c1be7e7c9301c813b1d1910b,PodSandboxId:1f330acbf710d087e319b677fab7fafa242eac36ec28c377662ac0776d0faf80,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706642853617702158,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qbssv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed8a3816-aae3-4b52-bb2c-ac44b0bc00a8,},Annotations:map[string]string{io.kubernetes.container.hash: 9f16749,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ae78849f4af6d9f08b4d734cc7143c4cac9ecb31ad82602cb42fa713a49d991,PodSandboxId:f8e5d8199105b8c26b42e8a5a1086b39b0c6f864bf241f071a156d959ed1651f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706642837511887322,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jrbpn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff13e8c3-aed8-46ec-807b-449a1334966a,},Annotations:map[string]string{io.kubernetes.container.hash: e4338f47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdf720e9a65ec0582d6a07ad4a10c6cbd2bbd18cd2a63aa8e2420f6eb784d6f,PodSandboxId:6ace016078b0624f798978852cc3ee0c132eb313393def5d3a4cab5b88e3efbf,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]str
ing{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1706642835867413737,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-dkd6v,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c43290cd-725a-4206-ae63-46a322eb06d2,},Annotations:map[string]string{io.kubernetes.container.hash: 5337fd2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fc71cf51ad74043cc0e01527317a7094610a5780bb372ef99af56ce9005efe,PodSandboxId:0a2f99158dc9cb52f52d2b41603be1de2947a79fa63934b093a59feb42e5da02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a
562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706642810550475168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd31ea74-8db4-466e-b08c-b4f0a078bd04,},Annotations:map[string]string{io.kubernetes.container.hash: 4e9199ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0235a5cc93812da6b9f723493609bd67fa828c4999147239a324b87f7d66a793,PodSandboxId:eeed07f8ff805250082e44b144ef44b3464079639b1921d0d1efe594f7c5fe17,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03f
edab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_EXITED,CreatedAt:1706642799211516273,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d47e36b-b93a-4940-bd3f-de7a05ece7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 97a195ca,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24644720b4039de15d72a2e2cf0cb5416fd67bf5bddec1809550281a20a26128,PodSandboxId:53aa1a54f5f2d899e5941dc2e249e3adfdef1771ebb1ea34a7a1321f9a1e1b18,Me
tadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706642783180264262,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-6vskh,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: f2a8b988-80f1-491d-8d0c-f9d7d229fc3c,},Annotations:map[string]string{io.kubernetes.container.hash: 44d3c5b5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab341bde2fdbd71d3b601f8818e44c30e9b3630192bb6e63e2
358254409c5675,PodSandboxId:0a2f99158dc9cb52f52d2b41603be1de2947a79fa63934b093a59feb42e5da02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706642777586886862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd31ea74-8db4-466e-b08c-b4f0a078bd04,},Annotations:map[string]string{io.kubernetes.container.hash: 4e9199ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd17
61f21e8ce8ce,PodSandboxId:5790fc7026bc3d5321e22f6f949374677d96809fad459138ef59287929d7be08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706642776018443557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q89vm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab0e3f-c53b-4ef2-9e79-925c916daccb,},Annotations:map[string]string{io.kubernetes.container.hash: 933b0158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee,PodSandboxId:344feef85
1175ba2c04d27a9f6be444e2b5a06585285f0c9b3dc2b0d3c368c7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706642765303246690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r4ktd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085b2aab-fb8d-4ef5-835f-da54ed4b6ad4,},Annotations:map[string]string{io.kubernetes.container.hash: 66cf7f73,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613,PodSandboxId:c8bda1b94b38e7edd7e1ef781cf9f6ebade5ecde6345dbfa32197b5d275b3bde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706642740399972648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c279f7a9d2eafe55967b6c05f6adcf8b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99,PodSandboxId:28d2192565f240d115a69ebf7cb563eb10773b931e636b064385a3e3286398b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706642740280296573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f2a69698f1d2351e5bab5f83a84173,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc9af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f,PodSandboxId:443c611f8481f412eb3f271f8ea37248ec27fcd3ba1b85439fe8b435db5deb09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706642740083497982,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5e31ef97da84c5d9705debeaffd97d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f,PodSandboxId:3f40940f89db41ce54c60c19c0e1d08cd62b38eb30f14a04a43998d60a322db1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706642739992029764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab667d1158f3157def0734547766a7ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6bf972e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4f4234a6-a812-4e14-9d9f-8dc4305b06a4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.895893414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a4169d89-6258-4650-bd75-d5a007aaba7f name=/runtime.v1.RuntimeService/Version
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.895971295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a4169d89-6258-4650-bd75-d5a007aaba7f name=/runtime.v1.RuntimeService/Version
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.897952001Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c8095af7-e4fb-45be-8517-4b9f68fbefbb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.899288955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706643092899157935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575989,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=c8095af7-e4fb-45be-8517-4b9f68fbefbb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.900303381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cc036d2a-e70f-4f96-9a49-700bd170779d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.900433912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cc036d2a-e70f-4f96-9a49-700bd170779d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.900757634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12626938ea3cb69f1f2ae294a1b6cd2f50455e95c8762bfed15c887cb0a8cc46,PodSandboxId:db05d2e3075ed02809f14a31bd11bad9f1240baa0a8f7fae7bbb6c2ff9fc4c86,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706643084880415746,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-xbxd2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4fc9505d-88a3-4bf7-a36b-9c82aafc6767,},Annotations:map[string]string{io.kubernetes.container.hash: 727a31b9,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61706464601edec1e3133bb5389ec01aa2ab6f406389b45dc4f830328b817b57,PodSandboxId:026ca814a4e3dcfb8193f56f591a0afa7ebfb29632250a24cd7c750e86db8b38,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706642943261810278,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18353055-d3bc-4d56-9040-5d238a7d772c,},Annotations:map[string]string{io.kubernet
es.container.hash: ee558c6d,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732e17c8bd5baaff5a4eb2c25057609fa27dac0120fef14c2bdce0ed851280b6,PodSandboxId:169b89f6ba5538748fc08b433dc90e3269d394d9e7e7ffbf53496ddbb0b8f6c2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706642930349757651,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-n64s5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 2fcc34b1-9bbf-4735-9978-febdfce4af37,},Annotations:map[string]string{io.kubernetes.container.hash: c6a45ebc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2f49b1f3a9b988906ad8f5c59141832c8bd608908754d80bd7c0db4c6b89d,PodSandboxId:87bca31ac7550c0fc927a8b48dea2c6be93325f61083f46f25088f0b785e2fec,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706642918409876691,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-h69t7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 044d901c-78ca-47f7-bc29-c9e52cc18d8e,},Annotations:map[string]string{io.kubernetes.container.hash: cfc2e5de,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a70160dcdffe4b4c365298032bfc21e616aabe0c1be7e7c9301c813b1d1910b,PodSandboxId:1f330acbf710d087e319b677fab7fafa242eac36ec28c377662ac0776d0faf80,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706642853617702158,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qbssv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed8a3816-aae3-4b52-bb2c-ac44b0bc00a8,},Annotations:map[string]string{io.kubernetes.container.hash: 9f16749,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ae78849f4af6d9f08b4d734cc7143c4cac9ecb31ad82602cb42fa713a49d991,PodSandboxId:f8e5d8199105b8c26b42e8a5a1086b39b0c6f864bf241f071a156d959ed1651f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706642837511887322,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jrbpn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff13e8c3-aed8-46ec-807b-449a1334966a,},Annotations:map[string]string{io.kubernetes.container.hash: e4338f47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdf720e9a65ec0582d6a07ad4a10c6cbd2bbd18cd2a63aa8e2420f6eb784d6f,PodSandboxId:6ace016078b0624f798978852cc3ee0c132eb313393def5d3a4cab5b88e3efbf,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]str
ing{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1706642835867413737,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-dkd6v,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c43290cd-725a-4206-ae63-46a322eb06d2,},Annotations:map[string]string{io.kubernetes.container.hash: 5337fd2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fc71cf51ad74043cc0e01527317a7094610a5780bb372ef99af56ce9005efe,PodSandboxId:0a2f99158dc9cb52f52d2b41603be1de2947a79fa63934b093a59feb42e5da02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a
562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706642810550475168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd31ea74-8db4-466e-b08c-b4f0a078bd04,},Annotations:map[string]string{io.kubernetes.container.hash: 4e9199ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0235a5cc93812da6b9f723493609bd67fa828c4999147239a324b87f7d66a793,PodSandboxId:eeed07f8ff805250082e44b144ef44b3464079639b1921d0d1efe594f7c5fe17,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03f
edab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_EXITED,CreatedAt:1706642799211516273,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d47e36b-b93a-4940-bd3f-de7a05ece7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 97a195ca,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24644720b4039de15d72a2e2cf0cb5416fd67bf5bddec1809550281a20a26128,PodSandboxId:53aa1a54f5f2d899e5941dc2e249e3adfdef1771ebb1ea34a7a1321f9a1e1b18,Me
tadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706642783180264262,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-6vskh,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: f2a8b988-80f1-491d-8d0c-f9d7d229fc3c,},Annotations:map[string]string{io.kubernetes.container.hash: 44d3c5b5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab341bde2fdbd71d3b601f8818e44c30e9b3630192bb6e63e2
358254409c5675,PodSandboxId:0a2f99158dc9cb52f52d2b41603be1de2947a79fa63934b093a59feb42e5da02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706642777586886862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd31ea74-8db4-466e-b08c-b4f0a078bd04,},Annotations:map[string]string{io.kubernetes.container.hash: 4e9199ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd17
61f21e8ce8ce,PodSandboxId:5790fc7026bc3d5321e22f6f949374677d96809fad459138ef59287929d7be08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706642776018443557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q89vm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab0e3f-c53b-4ef2-9e79-925c916daccb,},Annotations:map[string]string{io.kubernetes.container.hash: 933b0158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee,PodSandboxId:344feef85
1175ba2c04d27a9f6be444e2b5a06585285f0c9b3dc2b0d3c368c7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706642765303246690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r4ktd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085b2aab-fb8d-4ef5-835f-da54ed4b6ad4,},Annotations:map[string]string{io.kubernetes.container.hash: 66cf7f73,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613,PodSandboxId:c8bda1b94b38e7edd7e1ef781cf9f6ebade5ecde6345dbfa32197b5d275b3bde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706642740399972648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c279f7a9d2eafe55967b6c05f6adcf8b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99,PodSandboxId:28d2192565f240d115a69ebf7cb563eb10773b931e636b064385a3e3286398b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706642740280296573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f2a69698f1d2351e5bab5f83a84173,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc9af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f,PodSandboxId:443c611f8481f412eb3f271f8ea37248ec27fcd3ba1b85439fe8b435db5deb09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706642740083497982,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5e31ef97da84c5d9705debeaffd97d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f,PodSandboxId:3f40940f89db41ce54c60c19c0e1d08cd62b38eb30f14a04a43998d60a322db1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706642739992029764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab667d1158f3157def0734547766a7ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6bf972e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cc036d2a-e70f-4f96-9a49-700bd170779d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.937365372Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=00667dea-f02d-450c-8bd7-7d8aa41afb6d name=/runtime.v1.RuntimeService/Version
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.937445976Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=00667dea-f02d-450c-8bd7-7d8aa41afb6d name=/runtime.v1.RuntimeService/Version
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.938641841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=05d9e12a-376d-4647-abae-25b1e19a8737 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.939887587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706643092939873168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575989,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=05d9e12a-376d-4647-abae-25b1e19a8737 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.940702516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5802cadc-5132-463d-a27e-06bb742ae473 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.940778313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5802cadc-5132-463d-a27e-06bb742ae473 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.941257362Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12626938ea3cb69f1f2ae294a1b6cd2f50455e95c8762bfed15c887cb0a8cc46,PodSandboxId:db05d2e3075ed02809f14a31bd11bad9f1240baa0a8f7fae7bbb6c2ff9fc4c86,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706643084880415746,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-xbxd2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4fc9505d-88a3-4bf7-a36b-9c82aafc6767,},Annotations:map[string]string{io.kubernetes.container.hash: 727a31b9,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61706464601edec1e3133bb5389ec01aa2ab6f406389b45dc4f830328b817b57,PodSandboxId:026ca814a4e3dcfb8193f56f591a0afa7ebfb29632250a24cd7c750e86db8b38,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706642943261810278,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18353055-d3bc-4d56-9040-5d238a7d772c,},Annotations:map[string]string{io.kubernet
es.container.hash: ee558c6d,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732e17c8bd5baaff5a4eb2c25057609fa27dac0120fef14c2bdce0ed851280b6,PodSandboxId:169b89f6ba5538748fc08b433dc90e3269d394d9e7e7ffbf53496ddbb0b8f6c2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706642930349757651,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-n64s5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 2fcc34b1-9bbf-4735-9978-febdfce4af37,},Annotations:map[string]string{io.kubernetes.container.hash: c6a45ebc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2f49b1f3a9b988906ad8f5c59141832c8bd608908754d80bd7c0db4c6b89d,PodSandboxId:87bca31ac7550c0fc927a8b48dea2c6be93325f61083f46f25088f0b785e2fec,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706642918409876691,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-h69t7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 044d901c-78ca-47f7-bc29-c9e52cc18d8e,},Annotations:map[string]string{io.kubernetes.container.hash: cfc2e5de,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a70160dcdffe4b4c365298032bfc21e616aabe0c1be7e7c9301c813b1d1910b,PodSandboxId:1f330acbf710d087e319b677fab7fafa242eac36ec28c377662ac0776d0faf80,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706642853617702158,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qbssv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed8a3816-aae3-4b52-bb2c-ac44b0bc00a8,},Annotations:map[string]string{io.kubernetes.container.hash: 9f16749,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ae78849f4af6d9f08b4d734cc7143c4cac9ecb31ad82602cb42fa713a49d991,PodSandboxId:f8e5d8199105b8c26b42e8a5a1086b39b0c6f864bf241f071a156d959ed1651f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706642837511887322,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jrbpn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff13e8c3-aed8-46ec-807b-449a1334966a,},Annotations:map[string]string{io.kubernetes.container.hash: e4338f47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdf720e9a65ec0582d6a07ad4a10c6cbd2bbd18cd2a63aa8e2420f6eb784d6f,PodSandboxId:6ace016078b0624f798978852cc3ee0c132eb313393def5d3a4cab5b88e3efbf,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]str
ing{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1706642835867413737,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-dkd6v,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c43290cd-725a-4206-ae63-46a322eb06d2,},Annotations:map[string]string{io.kubernetes.container.hash: 5337fd2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fc71cf51ad74043cc0e01527317a7094610a5780bb372ef99af56ce9005efe,PodSandboxId:0a2f99158dc9cb52f52d2b41603be1de2947a79fa63934b093a59feb42e5da02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a
562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706642810550475168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd31ea74-8db4-466e-b08c-b4f0a078bd04,},Annotations:map[string]string{io.kubernetes.container.hash: 4e9199ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0235a5cc93812da6b9f723493609bd67fa828c4999147239a324b87f7d66a793,PodSandboxId:eeed07f8ff805250082e44b144ef44b3464079639b1921d0d1efe594f7c5fe17,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03f
edab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_EXITED,CreatedAt:1706642799211516273,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d47e36b-b93a-4940-bd3f-de7a05ece7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 97a195ca,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24644720b4039de15d72a2e2cf0cb5416fd67bf5bddec1809550281a20a26128,PodSandboxId:53aa1a54f5f2d899e5941dc2e249e3adfdef1771ebb1ea34a7a1321f9a1e1b18,Me
tadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706642783180264262,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-6vskh,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: f2a8b988-80f1-491d-8d0c-f9d7d229fc3c,},Annotations:map[string]string{io.kubernetes.container.hash: 44d3c5b5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab341bde2fdbd71d3b601f8818e44c30e9b3630192bb6e63e2
358254409c5675,PodSandboxId:0a2f99158dc9cb52f52d2b41603be1de2947a79fa63934b093a59feb42e5da02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706642777586886862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd31ea74-8db4-466e-b08c-b4f0a078bd04,},Annotations:map[string]string{io.kubernetes.container.hash: 4e9199ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd17
61f21e8ce8ce,PodSandboxId:5790fc7026bc3d5321e22f6f949374677d96809fad459138ef59287929d7be08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706642776018443557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q89vm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab0e3f-c53b-4ef2-9e79-925c916daccb,},Annotations:map[string]string{io.kubernetes.container.hash: 933b0158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee,PodSandboxId:344feef85
1175ba2c04d27a9f6be444e2b5a06585285f0c9b3dc2b0d3c368c7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706642765303246690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r4ktd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085b2aab-fb8d-4ef5-835f-da54ed4b6ad4,},Annotations:map[string]string{io.kubernetes.container.hash: 66cf7f73,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613,PodSandboxId:c8bda1b94b38e7edd7e1ef781cf9f6ebade5ecde6345dbfa32197b5d275b3bde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706642740399972648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c279f7a9d2eafe55967b6c05f6adcf8b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99,PodSandboxId:28d2192565f240d115a69ebf7cb563eb10773b931e636b064385a3e3286398b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706642740280296573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f2a69698f1d2351e5bab5f83a84173,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc9af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f,PodSandboxId:443c611f8481f412eb3f271f8ea37248ec27fcd3ba1b85439fe8b435db5deb09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706642740083497982,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5e31ef97da84c5d9705debeaffd97d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f,PodSandboxId:3f40940f89db41ce54c60c19c0e1d08cd62b38eb30f14a04a43998d60a322db1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706642739992029764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab667d1158f3157def0734547766a7ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6bf972e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5802cadc-5132-463d-a27e-06bb742ae473 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.982992512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fb600fd9-c7e2-4892-927c-cd7dfd6b4a63 name=/runtime.v1.RuntimeService/Version
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.983073928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fb600fd9-c7e2-4892-927c-cd7dfd6b4a63 name=/runtime.v1.RuntimeService/Version
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.984102758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=93994294-7d60-4981-8c64-57895b47e11b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.985467032Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706643092985450173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575989,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=93994294-7d60-4981-8c64-57895b47e11b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.986002837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3b525894-b989-46e6-915c-2fb4d310274e name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.986053326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3b525894-b989-46e6-915c-2fb4d310274e name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:31:32 addons-663262 crio[716]: time="2024-01-30 19:31:32.986556361Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12626938ea3cb69f1f2ae294a1b6cd2f50455e95c8762bfed15c887cb0a8cc46,PodSandboxId:db05d2e3075ed02809f14a31bd11bad9f1240baa0a8f7fae7bbb6c2ff9fc4c86,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706643084880415746,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-xbxd2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4fc9505d-88a3-4bf7-a36b-9c82aafc6767,},Annotations:map[string]string{io.kubernetes.container.hash: 727a31b9,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61706464601edec1e3133bb5389ec01aa2ab6f406389b45dc4f830328b817b57,PodSandboxId:026ca814a4e3dcfb8193f56f591a0afa7ebfb29632250a24cd7c750e86db8b38,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706642943261810278,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18353055-d3bc-4d56-9040-5d238a7d772c,},Annotations:map[string]string{io.kubernet
es.container.hash: ee558c6d,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732e17c8bd5baaff5a4eb2c25057609fa27dac0120fef14c2bdce0ed851280b6,PodSandboxId:169b89f6ba5538748fc08b433dc90e3269d394d9e7e7ffbf53496ddbb0b8f6c2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706642930349757651,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-n64s5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 2fcc34b1-9bbf-4735-9978-febdfce4af37,},Annotations:map[string]string{io.kubernetes.container.hash: c6a45ebc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2f49b1f3a9b988906ad8f5c59141832c8bd608908754d80bd7c0db4c6b89d,PodSandboxId:87bca31ac7550c0fc927a8b48dea2c6be93325f61083f46f25088f0b785e2fec,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706642918409876691,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-h69t7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 044d901c-78ca-47f7-bc29-c9e52cc18d8e,},Annotations:map[string]string{io.kubernetes.container.hash: cfc2e5de,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a70160dcdffe4b4c365298032bfc21e616aabe0c1be7e7c9301c813b1d1910b,PodSandboxId:1f330acbf710d087e319b677fab7fafa242eac36ec28c377662ac0776d0faf80,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706642853617702158,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qbssv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed8a3816-aae3-4b52-bb2c-ac44b0bc00a8,},Annotations:map[string]string{io.kubernetes.container.hash: 9f16749,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ae78849f4af6d9f08b4d734cc7143c4cac9ecb31ad82602cb42fa713a49d991,PodSandboxId:f8e5d8199105b8c26b42e8a5a1086b39b0c6f864bf241f071a156d959ed1651f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706642837511887322,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jrbpn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff13e8c3-aed8-46ec-807b-449a1334966a,},Annotations:map[string]string{io.kubernetes.container.hash: e4338f47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdf720e9a65ec0582d6a07ad4a10c6cbd2bbd18cd2a63aa8e2420f6eb784d6f,PodSandboxId:6ace016078b0624f798978852cc3ee0c132eb313393def5d3a4cab5b88e3efbf,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]str
ing{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1706642835867413737,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-dkd6v,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c43290cd-725a-4206-ae63-46a322eb06d2,},Annotations:map[string]string{io.kubernetes.container.hash: 5337fd2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fc71cf51ad74043cc0e01527317a7094610a5780bb372ef99af56ce9005efe,PodSandboxId:0a2f99158dc9cb52f52d2b41603be1de2947a79fa63934b093a59feb42e5da02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a
562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706642810550475168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd31ea74-8db4-466e-b08c-b4f0a078bd04,},Annotations:map[string]string{io.kubernetes.container.hash: 4e9199ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0235a5cc93812da6b9f723493609bd67fa828c4999147239a324b87f7d66a793,PodSandboxId:eeed07f8ff805250082e44b144ef44b3464079639b1921d0d1efe594f7c5fe17,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03f
edab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_EXITED,CreatedAt:1706642799211516273,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d47e36b-b93a-4940-bd3f-de7a05ece7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 97a195ca,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24644720b4039de15d72a2e2cf0cb5416fd67bf5bddec1809550281a20a26128,PodSandboxId:53aa1a54f5f2d899e5941dc2e249e3adfdef1771ebb1ea34a7a1321f9a1e1b18,Me
tadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706642783180264262,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-6vskh,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: f2a8b988-80f1-491d-8d0c-f9d7d229fc3c,},Annotations:map[string]string{io.kubernetes.container.hash: 44d3c5b5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab341bde2fdbd71d3b601f8818e44c30e9b3630192bb6e63e2
358254409c5675,PodSandboxId:0a2f99158dc9cb52f52d2b41603be1de2947a79fa63934b093a59feb42e5da02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706642777586886862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd31ea74-8db4-466e-b08c-b4f0a078bd04,},Annotations:map[string]string{io.kubernetes.container.hash: 4e9199ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd17
61f21e8ce8ce,PodSandboxId:5790fc7026bc3d5321e22f6f949374677d96809fad459138ef59287929d7be08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706642776018443557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q89vm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab0e3f-c53b-4ef2-9e79-925c916daccb,},Annotations:map[string]string{io.kubernetes.container.hash: 933b0158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee,PodSandboxId:344feef85
1175ba2c04d27a9f6be444e2b5a06585285f0c9b3dc2b0d3c368c7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706642765303246690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r4ktd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085b2aab-fb8d-4ef5-835f-da54ed4b6ad4,},Annotations:map[string]string{io.kubernetes.container.hash: 66cf7f73,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613,PodSandboxId:c8bda1b94b38e7edd7e1ef781cf9f6ebade5ecde6345dbfa32197b5d275b3bde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706642740399972648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c279f7a9d2eafe55967b6c05f6adcf8b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99,PodSandboxId:28d2192565f240d115a69ebf7cb563eb10773b931e636b064385a3e3286398b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706642740280296573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f2a69698f1d2351e5bab5f83a84173,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc9af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f,PodSandboxId:443c611f8481f412eb3f271f8ea37248ec27fcd3ba1b85439fe8b435db5deb09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706642740083497982,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5e31ef97da84c5d9705debeaffd97d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f,PodSandboxId:3f40940f89db41ce54c60c19c0e1d08cd62b38eb30f14a04a43998d60a322db1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706642739992029764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663262,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab667d1158f3157def0734547766a7ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6bf972e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3b525894-b989-46e6-915c-2fb4d310274e name=/runtime.v1.RuntimeService/ListContainers
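The ListContainers request/response pairs above are routine polls of the CRI socket (unix:///var/run/crio/crio.sock, the endpoint recorded in the node annotations further down). The same container list can be pulled by hand from inside the node; a minimal sketch, assuming the standard minikube and crictl CLIs (these exact invocations are illustrative and not taken from this run):

  minikube -p addons-663262 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"
  minikube -p addons-663262 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json"

The first form prints roughly the same table as the container status section below; the JSON form carries the full label and annotation set seen in the debug responses.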
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	12626938ea3cb       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   db05d2e3075ed       hello-world-app-5d77478584-xbxd2
	61706464601ed       docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25                              2 minutes ago       Running             nginx                     0                   026ca814a4e3d       nginx
	732e17c8bd5ba       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   169b89f6ba553       headlamp-7ddfbb94ff-n64s5
	f8f2f49b1f3a9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   87bca31ac7550       gcp-auth-d4c87556c-h69t7
	4a70160dcdffe       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   1f330acbf710d       ingress-nginx-admission-patch-qbssv
	5ae78849f4af6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              create                    0                   f8e5d8199105b       ingress-nginx-admission-create-jrbpn
	ebdf720e9a65e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   6ace016078b06       local-path-provisioner-78b46b4d5c-dkd6v
	73fc71cf51ad7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       1                   0a2f99158dc9c       storage-provisioner
	0235a5cc93812       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f             4 minutes ago       Exited              minikube-ingress-dns      0                   eeed07f8ff805       kube-ingress-dns-minikube
	24644720b4039       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              5 minutes ago       Running             yakd                      0                   53aa1a54f5f2d       yakd-dashboard-9947fc6bf-6vskh
	ab341bde2fdbd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Exited              storage-provisioner       0                   0a2f99158dc9c       storage-provisioner
	1458563d98f8a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             5 minutes ago       Running             kube-proxy                0                   5790fc7026bc3       kube-proxy-q89vm
	3780d6bf63bbc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             5 minutes ago       Running             coredns                   0                   344feef851175       coredns-5dd5756b68-r4ktd
	a4010b25ed628       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             5 minutes ago       Running             kube-scheduler            0                   c8bda1b94b38e       kube-scheduler-addons-663262
	ba39c4f4a62ce       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             5 minutes ago       Running             etcd                      0                   28d2192565f24       etcd-addons-663262
	00e831a7f17f9       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             5 minutes ago       Running             kube-controller-manager   0                   443c611f8481f       kube-controller-manager-addons-663262
	f276155f65e23       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             5 minutes ago       Running             kube-apiserver            0                   3f40940f89db4       kube-apiserver-addons-663262
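
Each row here is a truncated container ID plus the pod it belongs to; that mapping comes from the io.kubernetes.pod.name label visible in the ListContainers debug output above. A hedged example of recovering it directly for the hello-world-app container (crictl generally accepts an ID prefix; the grep filter is only illustrative):

  minikube -p addons-663262 ssh "sudo crictl inspect 12626938ea3cb | grep io.kubernetes.pod.name"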
	
	
	==> coredns [3780d6bf63bbc0e57db8aeade609b4a8babb5d03e50aed751a61a29b91daefee] <==
	[INFO] 10.244.0.9:59425 - 22131 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000151215s
	[INFO] 10.244.0.9:36886 - 45377 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000333463s
	[INFO] 10.244.0.9:36886 - 7235 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000063203s
	[INFO] 10.244.0.9:59648 - 54834 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057479s
	[INFO] 10.244.0.9:59648 - 63036 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000562s
	[INFO] 10.244.0.9:55128 - 20633 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000125172s
	[INFO] 10.244.0.9:55128 - 44954 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066965s
	[INFO] 10.244.0.9:52397 - 13114 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000079997s
	[INFO] 10.244.0.9:52397 - 22790 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000055562s
	[INFO] 10.244.0.9:42458 - 43566 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044425s
	[INFO] 10.244.0.9:42458 - 61729 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000031881s
	[INFO] 10.244.0.9:55242 - 31701 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000177405s
	[INFO] 10.244.0.9:55242 - 27091 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034682s
	[INFO] 10.244.0.9:51471 - 61149 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000037312s
	[INFO] 10.244.0.9:51471 - 49875 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000033319s
	[INFO] 10.244.0.22:52330 - 48802 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000562113s
	[INFO] 10.244.0.22:52720 - 28345 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000233101s
	[INFO] 10.244.0.22:36622 - 42407 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128992s
	[INFO] 10.244.0.22:40631 - 60992 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000077334s
	[INFO] 10.244.0.22:50220 - 39580 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000058805s
	[INFO] 10.244.0.22:57663 - 20326 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008697s
	[INFO] 10.244.0.22:57182 - 49628 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000672062s
	[INFO] 10.244.0.22:57889 - 12262 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000342818s
	[INFO] 10.244.0.25:39014 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000202217s
	[INFO] 10.244.0.25:58669 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105087s
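
The paired NXDOMAIN/NOERROR lines above are normal cluster-DNS behaviour: a pod's resolv.conf carries search domains (its own namespace first, here kube-system.svc.cluster.local, then svc.cluster.local and cluster.local) together with ndots:5, so an already-qualified name such as registry.kube-system.svc.cluster.local is first retried with each search suffix appended (NXDOMAIN) before the bare name answers NOERROR. A minimal way to reproduce this from a throwaway pod in the default namespace (pod names and the busybox image are illustrative, not part of this run):

  kubectl --context addons-663262 run dns-probe --image=busybox --restart=Never --rm -it -- nslookup registry.kube-system.svc.cluster.local
  kubectl --context addons-663262 run dns-probe-2 --image=busybox --restart=Never --rm -it -- cat /etc/resolv.conf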
	
	
	==> describe nodes <==
	Name:               addons-663262
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-663262
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=addons-663262
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T19_25_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-663262
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 19:25:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-663262
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 19:31:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 19:29:53 +0000   Tue, 30 Jan 2024 19:25:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 19:29:53 +0000   Tue, 30 Jan 2024 19:25:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 19:29:53 +0000   Tue, 30 Jan 2024 19:25:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 19:29:53 +0000   Tue, 30 Jan 2024 19:25:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    addons-663262
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd13cc2ab38d4705b9f9ccdb429188f3
	  System UUID:                cd13cc2a-b38d-4705-b9f9-ccdb429188f3
	  Boot ID:                    16e04988-7176-4f12-aae1-086e6205a7ad
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-xbxd2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  gcp-auth                    gcp-auth-d4c87556c-h69t7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  headlamp                    headlamp-7ddfbb94ff-n64s5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 coredns-5dd5756b68-r4ktd                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m33s
	  kube-system                 etcd-addons-663262                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m46s
	  kube-system                 kube-apiserver-addons-663262               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-controller-manager-addons-663262      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-proxy-q89vm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-addons-663262               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  local-path-storage          local-path-provisioner-78b46b4d5c-dkd6v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-6vskh             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m13s                  kube-proxy       
	  Normal  Starting                 5m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m54s (x5 over 5m54s)  kubelet          Node addons-663262 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s (x5 over 5m54s)  kubelet          Node addons-663262 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s (x5 over 5m54s)  kubelet          Node addons-663262 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m46s                  kubelet          Node addons-663262 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m46s                  kubelet          Node addons-663262 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m46s                  kubelet          Node addons-663262 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m46s                  kubelet          Node addons-663262 status is now: NodeReady
	  Normal  RegisteredNode           5m34s                  node-controller  Node addons-663262 event: Registered Node addons-663262 in Controller
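
The node details above can be regenerated at any time against the same profile, and individual fields (capacity, allocatable, conditions) are easier to consume as JSON. A small sketch, assuming the addons-663262 context is still present in the kubeconfig:

  kubectl --context addons-663262 describe node addons-663262
  kubectl --context addons-663262 get node addons-663262 -o jsonpath='{.status.allocatable}'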
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.969927] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.103881] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.149760] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.103590] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.190912] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +9.200564] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +8.735816] systemd-fstab-generator[1248]: Ignoring "noauto" for root device
	[Jan30 19:26] kauditd_printk_skb: 58 callbacks suppressed
	[  +8.469080] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.829858] kauditd_printk_skb: 16 callbacks suppressed
	[Jan30 19:27] kauditd_printk_skb: 18 callbacks suppressed
	[Jan30 19:28] kauditd_printk_skb: 28 callbacks suppressed
	[ +23.691295] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.597334] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.066490] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.500339] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.414076] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.703251] kauditd_printk_skb: 16 callbacks suppressed
	[Jan30 19:29] kauditd_printk_skb: 16 callbacks suppressed
	[  +9.156968] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.889804] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.908396] kauditd_printk_skb: 7 callbacks suppressed
	[ +17.221040] kauditd_printk_skb: 12 callbacks suppressed
	[Jan30 19:31] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [ba39c4f4a62ce75a53294db79e4bf3a010f58734e93c8ea3a89aa97b62ed7a99] <==
	{"level":"warn","ts":"2024-01-30T19:27:43.524043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.063228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13864"}
	{"level":"info","ts":"2024-01-30T19:27:43.524083Z","caller":"traceutil/trace.go:171","msg":"trace[448826451] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1123; }","duration":"269.106077ms","start":"2024-01-30T19:27:43.254972Z","end":"2024-01-30T19:27:43.524078Z","steps":["trace[448826451] 'agreement among raft nodes before linearized reading'  (duration: 269.029312ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:27:43.524209Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.896311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-30T19:27:43.524244Z","caller":"traceutil/trace.go:171","msg":"trace[883306938] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1123; }","duration":"193.932063ms","start":"2024-01-30T19:27:43.330307Z","end":"2024-01-30T19:27:43.524239Z","steps":["trace[883306938] 'agreement among raft nodes before linearized reading'  (duration: 193.885244ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T19:27:49.533066Z","caller":"traceutil/trace.go:171","msg":"trace[1274727849] linearizableReadLoop","detail":"{readStateIndex:1203; appliedIndex:1202; }","duration":"278.913545ms","start":"2024-01-30T19:27:49.254137Z","end":"2024-01-30T19:27:49.533051Z","steps":["trace[1274727849] 'read index received'  (duration: 278.752905ms)","trace[1274727849] 'applied index is now lower than readState.Index'  (duration: 160.227µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-30T19:27:49.533302Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.160883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13864"}
	{"level":"info","ts":"2024-01-30T19:27:49.533486Z","caller":"traceutil/trace.go:171","msg":"trace[1637354042] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1163; }","duration":"279.363255ms","start":"2024-01-30T19:27:49.254114Z","end":"2024-01-30T19:27:49.533478Z","steps":["trace[1637354042] 'agreement among raft nodes before linearized reading'  (duration: 279.060907ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T19:27:49.533405Z","caller":"traceutil/trace.go:171","msg":"trace[1033246303] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"294.933705ms","start":"2024-01-30T19:27:49.23845Z","end":"2024-01-30T19:27:49.533384Z","steps":["trace[1033246303] 'process raft request'  (duration: 294.470194ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T19:28:38.310433Z","caller":"traceutil/trace.go:171","msg":"trace[1516344271] linearizableReadLoop","detail":"{readStateIndex:1318; appliedIndex:1317; }","duration":"175.281393ms","start":"2024-01-30T19:28:38.135026Z","end":"2024-01-30T19:28:38.310307Z","steps":["trace[1516344271] 'read index received'  (duration: 175.145701ms)","trace[1516344271] 'applied index is now lower than readState.Index'  (duration: 133.993µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-30T19:28:38.310716Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.636095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4149"}
	{"level":"info","ts":"2024-01-30T19:28:38.310743Z","caller":"traceutil/trace.go:171","msg":"trace[195400333] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1268; }","duration":"175.768012ms","start":"2024-01-30T19:28:38.134969Z","end":"2024-01-30T19:28:38.310738Z","steps":["trace[195400333] 'agreement among raft nodes before linearized reading'  (duration: 175.60672ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T19:28:38.310967Z","caller":"traceutil/trace.go:171","msg":"trace[2002019610] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"300.821223ms","start":"2024-01-30T19:28:38.010133Z","end":"2024-01-30T19:28:38.310954Z","steps":["trace[2002019610] 'process raft request'  (duration: 300.080033ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:28:38.311074Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T19:28:38.010119Z","time spent":"300.866239ms","remote":"127.0.0.1:43932","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1265 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-30T19:28:50.058129Z","caller":"traceutil/trace.go:171","msg":"trace[616245168] linearizableReadLoop","detail":"{readStateIndex:1404; appliedIndex:1403; }","duration":"252.066023ms","start":"2024-01-30T19:28:49.80605Z","end":"2024-01-30T19:28:50.058116Z","steps":["trace[616245168] 'read index received'  (duration: 251.891063ms)","trace[616245168] 'applied index is now lower than readState.Index'  (duration: 174.58µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-30T19:28:50.058405Z","caller":"traceutil/trace.go:171","msg":"trace[2138494672] transaction","detail":"{read_only:false; response_revision:1351; number_of_response:1; }","duration":"377.393502ms","start":"2024-01-30T19:28:49.680999Z","end":"2024-01-30T19:28:50.058392Z","steps":["trace[2138494672] 'process raft request'  (duration: 376.98543ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:28:50.058524Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T19:28:49.680985Z","time spent":"377.478352ms","remote":"127.0.0.1:43956","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1318 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-01-30T19:28:50.05855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.507682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:85452"}
	{"level":"info","ts":"2024-01-30T19:28:50.058711Z","caller":"traceutil/trace.go:171","msg":"trace[456320380] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1351; }","duration":"252.676601ms","start":"2024-01-30T19:28:49.806025Z","end":"2024-01-30T19:28:50.058702Z","steps":["trace[456320380] 'agreement among raft nodes before linearized reading'  (duration: 252.217412ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T19:29:02.21103Z","caller":"traceutil/trace.go:171","msg":"trace[1510097714] transaction","detail":"{read_only:false; response_revision:1492; number_of_response:1; }","duration":"139.877346ms","start":"2024-01-30T19:29:02.071137Z","end":"2024-01-30T19:29:02.211014Z","steps":["trace[1510097714] 'process raft request'  (duration: 137.122187ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T19:29:25.476849Z","caller":"traceutil/trace.go:171","msg":"trace[340177527] linearizableReadLoop","detail":"{readStateIndex:1698; appliedIndex:1697; }","duration":"169.23054ms","start":"2024-01-30T19:29:25.307604Z","end":"2024-01-30T19:29:25.476834Z","steps":["trace[340177527] 'read index received'  (duration: 169.073402ms)","trace[340177527] 'applied index is now lower than readState.Index'  (duration: 156.769µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-30T19:29:25.476998Z","caller":"traceutil/trace.go:171","msg":"trace[172019669] transaction","detail":"{read_only:false; response_revision:1634; number_of_response:1; }","duration":"223.44924ms","start":"2024-01-30T19:29:25.25354Z","end":"2024-01-30T19:29:25.476989Z","steps":["trace[172019669] 'process raft request'  (duration: 223.184691ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:29:25.477184Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.582206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:9395"}
	{"level":"info","ts":"2024-01-30T19:29:25.477283Z","caller":"traceutil/trace.go:171","msg":"trace[353195677] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1634; }","duration":"169.695118ms","start":"2024-01-30T19:29:25.30758Z","end":"2024-01-30T19:29:25.477275Z","steps":["trace[353195677] 'agreement among raft nodes before linearized reading'  (duration: 169.52646ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:29:25.477436Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.593367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-30T19:29:25.477495Z","caller":"traceutil/trace.go:171","msg":"trace[462522003] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1634; }","duration":"149.656532ms","start":"2024-01-30T19:29:25.327831Z","end":"2024-01-30T19:29:25.477487Z","steps":["trace[462522003] 'agreement among raft nodes before linearized reading'  (duration: 149.576172ms)"],"step_count":1}
	
	
	==> gcp-auth [f8f2f49b1f3a9b988906ad8f5c59141832c8bd608908754d80bd7c0db4c6b89d] <==
	2024/01/30 19:28:41 Ready to write response ...
	2024/01/30 19:28:41 Ready to marshal response ...
	2024/01/30 19:28:41 Ready to write response ...
	2024/01/30 19:28:45 Ready to marshal response ...
	2024/01/30 19:28:45 Ready to write response ...
	2024/01/30 19:28:51 Ready to marshal response ...
	2024/01/30 19:28:51 Ready to write response ...
	2024/01/30 19:28:56 Ready to marshal response ...
	2024/01/30 19:28:56 Ready to write response ...
	2024/01/30 19:28:57 Ready to marshal response ...
	2024/01/30 19:28:57 Ready to write response ...
	2024/01/30 19:29:02 Ready to marshal response ...
	2024/01/30 19:29:02 Ready to write response ...
	2024/01/30 19:29:07 Ready to marshal response ...
	2024/01/30 19:29:07 Ready to write response ...
	2024/01/30 19:29:07 Ready to marshal response ...
	2024/01/30 19:29:07 Ready to write response ...
	2024/01/30 19:29:12 Ready to marshal response ...
	2024/01/30 19:29:12 Ready to write response ...
	2024/01/30 19:29:26 Ready to marshal response ...
	2024/01/30 19:29:26 Ready to write response ...
	2024/01/30 19:29:31 Ready to marshal response ...
	2024/01/30 19:29:31 Ready to write response ...
	2024/01/30 19:31:21 Ready to marshal response ...
	2024/01/30 19:31:21 Ready to write response ...
	
	
	==> kernel <==
	 19:31:33 up 6 min,  0 users,  load average: 0.88, 1.80, 1.00
	Linux addons-663262 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f276155f65e234f84dbb2ed172714ac14e086a06204aab7cc7950aec3acb2f8f] <==
	I0130 19:29:00.111436       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0130 19:29:00.132411       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0130 19:29:01.150964       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0130 19:29:27.107482       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0130 19:29:47.591018       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0130 19:29:49.062370       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 19:29:49.062476       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 19:29:49.072809       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 19:29:49.072911       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 19:29:49.080525       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 19:29:49.080589       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 19:29:49.101653       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 19:29:49.101832       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 19:29:49.111662       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 19:29:49.111748       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 19:29:49.131099       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 19:29:49.131186       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 19:29:49.135697       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 19:29:49.135773       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 19:29:49.154272       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 19:29:49.154404       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0130 19:29:50.080961       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0130 19:29:50.156593       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0130 19:29:50.159819       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0130 19:31:21.767713       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.72.106"}
	
	
	==> kube-controller-manager [00e831a7f17f9744ac51975616391805b1019d06981030a869f2c23d6def410f] <==
	E0130 19:30:19.597255       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 19:30:23.540461       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 19:30:23.540537       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 19:30:27.915841       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 19:30:27.915895       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 19:30:31.051729       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 19:30:31.051923       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 19:30:54.129971       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 19:30:54.130064       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 19:30:55.535531       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 19:30:55.535639       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 19:31:11.712469       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 19:31:11.712520       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 19:31:12.074722       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 19:31:12.074813       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0130 19:31:21.519588       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0130 19:31:21.565285       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-xbxd2"
	I0130 19:31:21.585931       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="67.434276ms"
	I0130 19:31:21.606835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.778221ms"
	I0130 19:31:21.607206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="123.136µs"
	I0130 19:31:25.034564       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0130 19:31:25.040738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="6.149µs"
	I0130 19:31:25.047206       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0130 19:31:26.017876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.851996ms"
	I0130 19:31:26.018033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="63.461µs"
	
	
	==> kube-proxy [1458563d98f8a121b6821f0be7eefb43e8e9b31ad947c2f2bd1761f21e8ce8ce] <==
	I0130 19:26:17.677167       1 server_others.go:69] "Using iptables proxy"
	I0130 19:26:18.005403       1 node.go:141] Successfully retrieved node IP: 192.168.39.252
	I0130 19:26:19.502539       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 19:26:19.502641       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 19:26:19.510002       1 server_others.go:152] "Using iptables Proxier"
	I0130 19:26:19.510091       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 19:26:19.510261       1 server.go:846] "Version info" version="v1.28.4"
	I0130 19:26:19.510660       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 19:26:19.511981       1 config.go:188] "Starting service config controller"
	I0130 19:26:19.512031       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 19:26:19.512140       1 config.go:97] "Starting endpoint slice config controller"
	I0130 19:26:19.512159       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 19:26:19.514800       1 config.go:315] "Starting node config controller"
	I0130 19:26:19.514842       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 19:26:19.619142       1 shared_informer.go:318] Caches are synced for node config
	I0130 19:26:19.619220       1 shared_informer.go:318] Caches are synced for service config
	I0130 19:26:19.619544       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a4010b25ed628eb49e325aeaa91bbb572f4dbe6b885a27dd7f83d94a87ed3613] <==
	W0130 19:25:44.304147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0130 19:25:44.304175       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0130 19:25:44.304227       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 19:25:44.304255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0130 19:25:44.304566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 19:25:44.304607       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0130 19:25:45.146909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 19:25:45.146959       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0130 19:25:45.191974       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 19:25:45.191995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 19:25:45.258543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 19:25:45.258595       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0130 19:25:45.390612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 19:25:45.390705       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0130 19:25:45.403648       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 19:25:45.403725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0130 19:25:45.426236       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 19:25:45.426376       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 19:25:45.470596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 19:25:45.470675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0130 19:25:45.471231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 19:25:45.471280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0130 19:25:45.475527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0130 19:25:45.475613       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0130 19:25:48.668439       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 19:25:16 UTC, ends at Tue 2024-01-30 19:31:33 UTC. --
	Jan 30 19:31:21 addons-663262 kubelet[1255]: I0130 19:31:21.580169    1255 memory_manager.go:346] "RemoveStaleState removing state" podUID="a6749615-fe63-4919-92ba-16122bdc9608" containerName="csi-snapshotter"
	Jan 30 19:31:21 addons-663262 kubelet[1255]: I0130 19:31:21.580176    1255 memory_manager.go:346] "RemoveStaleState removing state" podUID="a6749615-fe63-4919-92ba-16122bdc9608" containerName="liveness-probe"
	Jan 30 19:31:21 addons-663262 kubelet[1255]: I0130 19:31:21.580182    1255 memory_manager.go:346] "RemoveStaleState removing state" podUID="a6749615-fe63-4919-92ba-16122bdc9608" containerName="hostpath"
	Jan 30 19:31:21 addons-663262 kubelet[1255]: I0130 19:31:21.580188    1255 memory_manager.go:346] "RemoveStaleState removing state" podUID="c94d98ee-92e9-4c31-a29b-97f4764327b1" containerName="csi-attacher"
	Jan 30 19:31:21 addons-663262 kubelet[1255]: I0130 19:31:21.580194    1255 memory_manager.go:346] "RemoveStaleState removing state" podUID="73d6810d-15b6-44f5-bea4-451ed50846e9" containerName="csi-resizer"
	Jan 30 19:31:21 addons-663262 kubelet[1255]: I0130 19:31:21.613220    1255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4fc9505d-88a3-4bf7-a36b-9c82aafc6767-gcp-creds\") pod \"hello-world-app-5d77478584-xbxd2\" (UID: \"4fc9505d-88a3-4bf7-a36b-9c82aafc6767\") " pod="default/hello-world-app-5d77478584-xbxd2"
	Jan 30 19:31:21 addons-663262 kubelet[1255]: I0130 19:31:21.613289    1255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w4h6\" (UniqueName: \"kubernetes.io/projected/4fc9505d-88a3-4bf7-a36b-9c82aafc6767-kube-api-access-5w4h6\") pod \"hello-world-app-5d77478584-xbxd2\" (UID: \"4fc9505d-88a3-4bf7-a36b-9c82aafc6767\") " pod="default/hello-world-app-5d77478584-xbxd2"
	Jan 30 19:31:22 addons-663262 kubelet[1255]: I0130 19:31:22.976976    1255 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeed07f8ff805250082e44b144ef44b3464079639b1921d0d1efe594f7c5fe17"
	Jan 30 19:31:23 addons-663262 kubelet[1255]: I0130 19:31:23.123914    1255 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qqkx\" (UniqueName: \"kubernetes.io/projected/1d47e36b-b93a-4940-bd3f-de7a05ece7ed-kube-api-access-7qqkx\") pod \"1d47e36b-b93a-4940-bd3f-de7a05ece7ed\" (UID: \"1d47e36b-b93a-4940-bd3f-de7a05ece7ed\") "
	Jan 30 19:31:23 addons-663262 kubelet[1255]: I0130 19:31:23.127140    1255 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d47e36b-b93a-4940-bd3f-de7a05ece7ed-kube-api-access-7qqkx" (OuterVolumeSpecName: "kube-api-access-7qqkx") pod "1d47e36b-b93a-4940-bd3f-de7a05ece7ed" (UID: "1d47e36b-b93a-4940-bd3f-de7a05ece7ed"). InnerVolumeSpecName "kube-api-access-7qqkx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 30 19:31:23 addons-663262 kubelet[1255]: I0130 19:31:23.224748    1255 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7qqkx\" (UniqueName: \"kubernetes.io/projected/1d47e36b-b93a-4940-bd3f-de7a05ece7ed-kube-api-access-7qqkx\") on node \"addons-663262\" DevicePath \"\""
	Jan 30 19:31:25 addons-663262 kubelet[1255]: I0130 19:31:25.662094    1255 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1d47e36b-b93a-4940-bd3f-de7a05ece7ed" path="/var/lib/kubelet/pods/1d47e36b-b93a-4940-bd3f-de7a05ece7ed/volumes"
	Jan 30 19:31:25 addons-663262 kubelet[1255]: I0130 19:31:25.662633    1255 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ed8a3816-aae3-4b52-bb2c-ac44b0bc00a8" path="/var/lib/kubelet/pods/ed8a3816-aae3-4b52-bb2c-ac44b0bc00a8/volumes"
	Jan 30 19:31:25 addons-663262 kubelet[1255]: I0130 19:31:25.663010    1255 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ff13e8c3-aed8-46ec-807b-449a1334966a" path="/var/lib/kubelet/pods/ff13e8c3-aed8-46ec-807b-449a1334966a/volumes"
	Jan 30 19:31:28 addons-663262 kubelet[1255]: I0130 19:31:28.362238    1255 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b942ef6e-1f90-409a-8294-cd04852c9ca5-webhook-cert\") pod \"b942ef6e-1f90-409a-8294-cd04852c9ca5\" (UID: \"b942ef6e-1f90-409a-8294-cd04852c9ca5\") "
	Jan 30 19:31:28 addons-663262 kubelet[1255]: I0130 19:31:28.362305    1255 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b8ng\" (UniqueName: \"kubernetes.io/projected/b942ef6e-1f90-409a-8294-cd04852c9ca5-kube-api-access-2b8ng\") pod \"b942ef6e-1f90-409a-8294-cd04852c9ca5\" (UID: \"b942ef6e-1f90-409a-8294-cd04852c9ca5\") "
	Jan 30 19:31:28 addons-663262 kubelet[1255]: I0130 19:31:28.372746    1255 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b942ef6e-1f90-409a-8294-cd04852c9ca5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b942ef6e-1f90-409a-8294-cd04852c9ca5" (UID: "b942ef6e-1f90-409a-8294-cd04852c9ca5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 30 19:31:28 addons-663262 kubelet[1255]: I0130 19:31:28.373575    1255 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b942ef6e-1f90-409a-8294-cd04852c9ca5-kube-api-access-2b8ng" (OuterVolumeSpecName: "kube-api-access-2b8ng") pod "b942ef6e-1f90-409a-8294-cd04852c9ca5" (UID: "b942ef6e-1f90-409a-8294-cd04852c9ca5"). InnerVolumeSpecName "kube-api-access-2b8ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 30 19:31:28 addons-663262 kubelet[1255]: I0130 19:31:28.462582    1255 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b942ef6e-1f90-409a-8294-cd04852c9ca5-webhook-cert\") on node \"addons-663262\" DevicePath \"\""
	Jan 30 19:31:28 addons-663262 kubelet[1255]: I0130 19:31:28.462616    1255 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2b8ng\" (UniqueName: \"kubernetes.io/projected/b942ef6e-1f90-409a-8294-cd04852c9ca5-kube-api-access-2b8ng\") on node \"addons-663262\" DevicePath \"\""
	Jan 30 19:31:29 addons-663262 kubelet[1255]: I0130 19:31:29.008559    1255 scope.go:117] "RemoveContainer" containerID="59fd1d6be6683c436f342678c33d5889cb06c7c179186a67378654ce03fc79e6"
	Jan 30 19:31:29 addons-663262 kubelet[1255]: I0130 19:31:29.049236    1255 scope.go:117] "RemoveContainer" containerID="59fd1d6be6683c436f342678c33d5889cb06c7c179186a67378654ce03fc79e6"
	Jan 30 19:31:29 addons-663262 kubelet[1255]: E0130 19:31:29.049906    1255 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59fd1d6be6683c436f342678c33d5889cb06c7c179186a67378654ce03fc79e6\": container with ID starting with 59fd1d6be6683c436f342678c33d5889cb06c7c179186a67378654ce03fc79e6 not found: ID does not exist" containerID="59fd1d6be6683c436f342678c33d5889cb06c7c179186a67378654ce03fc79e6"
	Jan 30 19:31:29 addons-663262 kubelet[1255]: I0130 19:31:29.049946    1255 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59fd1d6be6683c436f342678c33d5889cb06c7c179186a67378654ce03fc79e6"} err="failed to get container status \"59fd1d6be6683c436f342678c33d5889cb06c7c179186a67378654ce03fc79e6\": rpc error: code = NotFound desc = could not find container \"59fd1d6be6683c436f342678c33d5889cb06c7c179186a67378654ce03fc79e6\": container with ID starting with 59fd1d6be6683c436f342678c33d5889cb06c7c179186a67378654ce03fc79e6 not found: ID does not exist"
	Jan 30 19:31:29 addons-663262 kubelet[1255]: I0130 19:31:29.661815    1255 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b942ef6e-1f90-409a-8294-cd04852c9ca5" path="/var/lib/kubelet/pods/b942ef6e-1f90-409a-8294-cd04852c9ca5/volumes"
	
	
	==> storage-provisioner [73fc71cf51ad74043cc0e01527317a7094610a5780bb372ef99af56ce9005efe] <==
	I0130 19:26:50.811922       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 19:26:50.851167       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 19:26:50.851235       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 19:26:50.870087       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 19:26:50.870249       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-663262_47c35aaa-314c-458a-9c14-4af8745e3b63!
	I0130 19:26:50.873534       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc947efe-54a0-4ee9-bbd2-df9af1c78e01", APIVersion:"v1", ResourceVersion:"975", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-663262_47c35aaa-314c-458a-9c14-4af8745e3b63 became leader
	I0130 19:26:50.974741       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-663262_47c35aaa-314c-458a-9c14-4af8745e3b63!
	
	
	==> storage-provisioner [ab341bde2fdbd71d3b601f8818e44c30e9b3630192bb6e63e2358254409c5675] <==
	I0130 19:26:20.232168       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0130 19:26:50.278154       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-663262 -n addons-663262
helpers_test.go:261: (dbg) Run:  kubectl --context addons-663262 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (158.06s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.13s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-663262
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-663262: exit status 82 (2m0.263998194s)

                                                
                                                
-- stdout --
	* Stopping node "addons-663262"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-663262" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-663262
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-663262: exit status 11 (21.576967267s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.252:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-663262" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-663262
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-663262: exit status 11 (6.143381751s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.252:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-663262" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-663262
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-663262: exit status 11 (6.143471489s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.252:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-663262" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-741304 /tmp/TestFunctionalserialCacheCmdcacheadd_local3874807204/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 cache add minikube-local-cache-test:functional-741304
functional_test.go:1085: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 cache add minikube-local-cache-test:functional-741304: exit status 10 (937.850714ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: Failed to cache and load images: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/minikube-local-cache-test_functional-741304": write: unable to calculate manifest: blob sha256:57c8d01a343d9fed9488b01cef3eca66a843172cdd4d827e65b48e4be76e7b3c not found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_2afa1f46100fe43c96502fe4c3cba7d663c0ca1a_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1087: failed to 'cache add' local image "minikube-local-cache-test:functional-741304". args "out/minikube-linux-amd64 -p functional-741304 cache add minikube-local-cache-test:functional-741304" err exit status 10
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 cache delete minikube-local-cache-test:functional-741304
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 cache delete minikube-local-cache-test:functional-741304: exit status 30 (62.235784ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: Failed to delete images: remove /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/minikube-local-cache-test_functional-741304: no such file or directory
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_06365dad8d9f67cbedf98bea98c443327f21cb29_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1092: failed to 'cache delete' local image "minikube-local-cache-test:functional-741304". args "out/minikube-linux-amd64 -p functional-741304 cache delete minikube-local-cache-test:functional-741304" err exit status 30
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-741304
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image load --daemon gcr.io/google-containers/addon-resizer:functional-741304 --alsologtostderr
functional_test.go:354: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 image load --daemon gcr.io/google-containers/addon-resizer:functional-741304 --alsologtostderr: exit status 80 (1.130104351s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 19:38:23.832559   19744 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:38:23.832717   19744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:23.832728   19744 out.go:309] Setting ErrFile to fd 2...
	I0130 19:38:23.832736   19744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:23.833035   19744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:38:23.833857   19744 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:38:23.833948   19744 cache.go:107] acquiring lock: {Name:mk3f8c10c95aaf383f7e02c3e78f179a6f00a3aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:38:23.834190   19744 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-741304
	I0130 19:38:23.836089   19744 image.go:173] found gcr.io/google-containers/addon-resizer:functional-741304 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-741304 original:gcr.io/google-containers/addon-resizer:functional-741304} opener:0xc0005a43f0 tarballImage:<nil> computed:false id:0xc00071e5e0 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0130 19:38:23.836126   19744 cache.go:162] opening:  /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304
	I0130 19:38:24.880940   19744 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-741304" -> "/home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304" took 1.047003601s
	I0130 19:38:24.882914   19744 out.go:177] 
	W0130 19:38:24.884111   19744 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0130 19:38:24.884125   19744 out.go:239] * 
	* 
	W0130 19:38:24.885882   19744 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 19:38:24.887078   19744 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:356: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image load --daemon gcr.io/google-containers/addon-resizer:functional-741304 --alsologtostderr
functional_test.go:364: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 image load --daemon gcr.io/google-containers/addon-resizer:functional-741304 --alsologtostderr: exit status 80 (700.776748ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 19:38:24.946504   19869 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:38:24.946633   19869 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:24.946641   19869 out.go:309] Setting ErrFile to fd 2...
	I0130 19:38:24.946645   19869 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:24.946808   19869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:38:24.947382   19869 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:38:24.947438   19869 cache.go:107] acquiring lock: {Name:mk3f8c10c95aaf383f7e02c3e78f179a6f00a3aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:38:24.947526   19869 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-741304
	I0130 19:38:24.948963   19869 image.go:173] found gcr.io/google-containers/addon-resizer:functional-741304 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-741304 original:gcr.io/google-containers/addon-resizer:functional-741304} opener:0xc00046c070 tarballImage:<nil> computed:false id:0xc0009ea060 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0130 19:38:24.948986   19869 cache.go:162] opening:  /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304
	I0130 19:38:25.576362   19869 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-741304" -> "/home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304" took 628.932206ms
	I0130 19:38:25.578978   19869 out.go:177] 
	W0130 19:38:25.580274   19869 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0130 19:38:25.580287   19869 out.go:239] * 
	* 
	W0130 19:38:25.582063   19869 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 19:38:25.583521   19869 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:366: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.023763662s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-741304
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image load --daemon gcr.io/google-containers/addon-resizer:functional-741304 --alsologtostderr
functional_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 image load --daemon gcr.io/google-containers/addon-resizer:functional-741304 --alsologtostderr: exit status 80 (712.361367ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 19:38:27.693474   19915 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:38:27.693720   19915 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:27.693731   19915 out.go:309] Setting ErrFile to fd 2...
	I0130 19:38:27.693738   19915 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:27.693932   19915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:38:27.694483   19915 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:38:27.694558   19915 cache.go:107] acquiring lock: {Name:mk3f8c10c95aaf383f7e02c3e78f179a6f00a3aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:38:27.694657   19915 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-741304
	I0130 19:38:27.696127   19915 image.go:173] found gcr.io/google-containers/addon-resizer:functional-741304 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-741304 original:gcr.io/google-containers/addon-resizer:functional-741304} opener:0xc00081c000 tarballImage:<nil> computed:false id:0xc00088e100 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0130 19:38:27.696166   19915 cache.go:162] opening:  /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304
	I0130 19:38:28.333725   19915 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-741304" -> "/home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304" took 639.178122ms
	I0130 19:38:28.336013   19915 out.go:177] 
	W0130 19:38:28.337349   19915 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304": write: unable to calculate manifest: blob sha256:f3896f083e92c804887811c3ec1e7c7e38dd72e96aec843c52a5af3fd81d0e6a not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304": write: unable to calculate manifest: blob sha256:f3896f083e92c804887811c3ec1e7c7e38dd72e96aec843c52a5af3fd81d0e6a not found
	W0130 19:38:28.337369   19915 out.go:239] * 
	* 
	W0130 19:38:28.339785   19915 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 19:38:28.341060   19915 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:246: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image save gcr.io/google-containers/addon-resizer:functional-741304 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0130 19:38:29.218397   20008 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:38:29.218646   20008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:29.218655   20008 out.go:309] Setting ErrFile to fd 2...
	I0130 19:38:29.218659   20008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:29.218863   20008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:38:29.219460   20008 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:38:29.219559   20008 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:38:29.219902   20008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:38:29.219947   20008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:38:29.233654   20008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38461
	I0130 19:38:29.234086   20008 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:38:29.234561   20008 main.go:141] libmachine: Using API Version  1
	I0130 19:38:29.234582   20008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:38:29.234901   20008 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:38:29.235102   20008 main.go:141] libmachine: (functional-741304) Calling .GetState
	I0130 19:38:29.236785   20008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:38:29.236820   20008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:38:29.250008   20008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I0130 19:38:29.250330   20008 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:38:29.250721   20008 main.go:141] libmachine: Using API Version  1
	I0130 19:38:29.250739   20008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:38:29.251015   20008 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:38:29.251162   20008 main.go:141] libmachine: (functional-741304) Calling .DriverName
	I0130 19:38:29.251378   20008 ssh_runner.go:195] Run: systemctl --version
	I0130 19:38:29.251405   20008 main.go:141] libmachine: (functional-741304) Calling .GetSSHHostname
	I0130 19:38:29.253910   20008 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
	I0130 19:38:29.254283   20008 main.go:141] libmachine: (functional-741304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:f0", ip: ""} in network mk-functional-741304: {Iface:virbr1 ExpiryTime:2024-01-30 20:35:42 +0000 UTC Type:0 Mac:52:54:00:44:f0:f0 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:functional-741304 Clientid:01:52:54:00:44:f0:f0}
	I0130 19:38:29.254313   20008 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined IP address 192.168.50.230 and MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
	I0130 19:38:29.254430   20008 main.go:141] libmachine: (functional-741304) Calling .GetSSHPort
	I0130 19:38:29.254585   20008 main.go:141] libmachine: (functional-741304) Calling .GetSSHKeyPath
	I0130 19:38:29.254737   20008 main.go:141] libmachine: (functional-741304) Calling .GetSSHUsername
	I0130 19:38:29.254844   20008 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/functional-741304/id_rsa Username:docker}
	I0130 19:38:29.346382   20008 cache_images.go:286] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
	W0130 19:38:29.346446   20008 cache_images.go:254] Failed to load cached images for profile functional-741304. make sure the profile is running. loading images: stat /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar: no such file or directory
	I0130 19:38:29.346465   20008 cache_images.go:262] succeeded pushing to: 
	I0130 19:38:29.346469   20008 cache_images.go:263] failed pushing to: functional-741304
	I0130 19:38:29.346513   20008 main.go:141] libmachine: Making call to close driver server
	I0130 19:38:29.346528   20008 main.go:141] libmachine: (functional-741304) Calling .Close
	I0130 19:38:29.346776   20008 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:38:29.346807   20008 main.go:141] libmachine: (functional-741304) DBG | Closing plugin on server side
	I0130 19:38:29.346831   20008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:38:29.346857   20008 main.go:141] libmachine: Making call to close driver server
	I0130 19:38:29.346872   20008 main.go:141] libmachine: (functional-741304) Calling .Close
	I0130 19:38:29.347043   20008 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:38:29.347056   20008 main.go:141] libmachine: Making call to close connection to plugin binary

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-741304
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image save --daemon gcr.io/google-containers/addon-resizer:functional-741304 --alsologtostderr
functional_test.go:423: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 image save --daemon gcr.io/google-containers/addon-resizer:functional-741304 --alsologtostderr: exit status 80 (236.970205ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 19:38:29.425224   20041 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:38:29.425362   20041 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:29.425371   20041 out.go:309] Setting ErrFile to fd 2...
	I0130 19:38:29.425376   20041 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:29.425583   20041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:38:29.426192   20041 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:38:29.426219   20041 cache_images.go:396] Save images: ["gcr.io/google-containers/addon-resizer:functional-741304"]
	I0130 19:38:29.426317   20041 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:38:29.426686   20041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:38:29.426732   20041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:38:29.440584   20041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33035
	I0130 19:38:29.441021   20041 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:38:29.441572   20041 main.go:141] libmachine: Using API Version  1
	I0130 19:38:29.441592   20041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:38:29.441996   20041 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:38:29.442213   20041 main.go:141] libmachine: (functional-741304) Calling .GetState
	I0130 19:38:29.444053   20041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:38:29.444098   20041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:38:29.458276   20041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0130 19:38:29.458649   20041 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:38:29.459129   20041 main.go:141] libmachine: Using API Version  1
	I0130 19:38:29.459154   20041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:38:29.459477   20041 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:38:29.459681   20041 main.go:141] libmachine: (functional-741304) Calling .DriverName
	I0130 19:38:29.459828   20041 cache_images.go:341] SaveImages start: [gcr.io/google-containers/addon-resizer:functional-741304]
	I0130 19:38:29.459904   20041 ssh_runner.go:195] Run: systemctl --version
	I0130 19:38:29.459923   20041 main.go:141] libmachine: (functional-741304) Calling .GetSSHHostname
	I0130 19:38:29.462202   20041 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
	I0130 19:38:29.462563   20041 main.go:141] libmachine: (functional-741304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:f0", ip: ""} in network mk-functional-741304: {Iface:virbr1 ExpiryTime:2024-01-30 20:35:42 +0000 UTC Type:0 Mac:52:54:00:44:f0:f0 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:functional-741304 Clientid:01:52:54:00:44:f0:f0}
	I0130 19:38:29.462594   20041 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined IP address 192.168.50.230 and MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
	I0130 19:38:29.462707   20041 main.go:141] libmachine: (functional-741304) Calling .GetSSHPort
	I0130 19:38:29.462860   20041 main.go:141] libmachine: (functional-741304) Calling .GetSSHKeyPath
	I0130 19:38:29.463009   20041 main.go:141] libmachine: (functional-741304) Calling .GetSSHUsername
	I0130 19:38:29.463106   20041 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/functional-741304/id_rsa Username:docker}
	I0130 19:38:29.553098   20041 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/google-containers/addon-resizer:functional-741304
	I0130 19:38:29.599748   20041 cache_images.go:345] SaveImages completed in 139.902498ms
	W0130 19:38:29.599771   20041 cache_images.go:442] Failed to load cached images for profile functional-741304. make sure the profile is running. saving cached images: image gcr.io/google-containers/addon-resizer:functional-741304 not found
	I0130 19:38:29.599783   20041 cache_images.go:450] succeeded pulling from : 
	I0130 19:38:29.599787   20041 cache_images.go:451] failed pulling from : functional-741304
	I0130 19:38:29.599813   20041 main.go:141] libmachine: Making call to close driver server
	I0130 19:38:29.599828   20041 main.go:141] libmachine: (functional-741304) Calling .Close
	I0130 19:38:29.600082   20041 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:38:29.600103   20041 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:38:29.600113   20041 main.go:141] libmachine: Making call to close driver server
	I0130 19:38:29.600122   20041 main.go:141] libmachine: (functional-741304) Calling .Close
	I0130 19:38:29.600130   20041 main.go:141] libmachine: (functional-741304) DBG | Closing plugin on server side
	I0130 19:38:29.600362   20041 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:38:29.600384   20041 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:38:29.600380   20041 main.go:141] libmachine: (functional-741304) DBG | Closing plugin on server side
	I0130 19:38:29.602785   20041 out.go:177] 
	W0130 19:38:29.604152   20041 out.go:239] X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304: no such file or directory
	X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-741304: no such file or directory
	W0130 19:38:29.604169   20041 out.go:239] * 
	* 
	W0130 19:38:29.605879   20041 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 19:38:29.607140   20041 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:425: saving image from minikube to daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.26s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (174.97s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-223875 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-223875 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.039169356s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-223875 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-223875 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [29ff00c7-92d9-4039-a43b-c50db0872e7f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [29ff00c7-92d9-4039-a43b-c50db0872e7f] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.003830923s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-223875 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0130 19:43:07.773452   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:07.778753   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:07.789002   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:07.809251   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:07.849492   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:07.929787   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:08.090192   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:08.410739   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:09.051732   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:10.332222   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:12.893156   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:18.013531   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:28.254269   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:43:39.711185   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:43:48.734741   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:44:07.396915   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-223875 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.439335162s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-223875 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-223875 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.152
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-223875 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-223875 addons disable ingress-dns --alsologtostderr -v=1: (4.141655973s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-223875 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-223875 addons disable ingress --alsologtostderr -v=1: (7.593640639s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-223875 -n ingress-addon-legacy-223875
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-223875 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-223875 logs -n 25: (1.099948202s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-741304 image load --daemon                                     | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC |                     |
	|                | gcr.io/google-containers/addon-resizer:functional-741304                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-741304 image load --daemon                                     | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC |                     |
	|                | gcr.io/google-containers/addon-resizer:functional-741304                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-741304 image save                                              | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-741304                  |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-741304 image rm                                                | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-741304                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-741304 image ls                                                | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	| image          | functional-741304 image load                                              | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-741304 image save --daemon                                     | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC |                     |
	|                | gcr.io/google-containers/addon-resizer:functional-741304                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| update-context | functional-741304                                                         | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-741304                                                         | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-741304                                                         | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-741304                                                         | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-741304                                                         | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-741304 ssh pgrep                                               | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-741304 image build -t                                          | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	|                | localhost/my-image:functional-741304                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-741304 image ls                                                | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	| image          | functional-741304                                                         | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-741304                                                         | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:38 UTC | 30 Jan 24 19:38 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| delete         | -p functional-741304                                                      | functional-741304           | jenkins | v1.32.0 | 30 Jan 24 19:39 UTC | 30 Jan 24 19:39 UTC |
	| start          | -p ingress-addon-legacy-223875                                            | ingress-addon-legacy-223875 | jenkins | v1.32.0 | 30 Jan 24 19:39 UTC | 30 Jan 24 19:41 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-223875                                               | ingress-addon-legacy-223875 | jenkins | v1.32.0 | 30 Jan 24 19:41 UTC | 30 Jan 24 19:41 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-223875                                               | ingress-addon-legacy-223875 | jenkins | v1.32.0 | 30 Jan 24 19:41 UTC | 30 Jan 24 19:41 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-223875                                               | ingress-addon-legacy-223875 | jenkins | v1.32.0 | 30 Jan 24 19:41 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-223875 ip                                            | ingress-addon-legacy-223875 | jenkins | v1.32.0 | 30 Jan 24 19:44 UTC | 30 Jan 24 19:44 UTC |
	| addons         | ingress-addon-legacy-223875                                               | ingress-addon-legacy-223875 | jenkins | v1.32.0 | 30 Jan 24 19:44 UTC | 30 Jan 24 19:44 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-223875                                               | ingress-addon-legacy-223875 | jenkins | v1.32.0 | 30 Jan 24 19:44 UTC | 30 Jan 24 19:44 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 19:39:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 19:39:05.414205   20626 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:39:05.414331   20626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:39:05.414339   20626 out.go:309] Setting ErrFile to fd 2...
	I0130 19:39:05.414344   20626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:39:05.414533   20626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:39:05.415071   20626 out.go:303] Setting JSON to false
	I0130 19:39:05.415888   20626 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1291,"bootTime":1706642255,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:39:05.415945   20626 start.go:138] virtualization: kvm guest
	I0130 19:39:05.418120   20626 out.go:177] * [ingress-addon-legacy-223875] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:39:05.419569   20626 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 19:39:05.420902   20626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:39:05.419594   20626 notify.go:220] Checking for updates...
	I0130 19:39:05.423597   20626 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 19:39:05.424873   20626 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:39:05.426090   20626 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 19:39:05.427351   20626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 19:39:05.428682   20626 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:39:05.463214   20626 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 19:39:05.464350   20626 start.go:298] selected driver: kvm2
	I0130 19:39:05.464365   20626 start.go:902] validating driver "kvm2" against <nil>
	I0130 19:39:05.464375   20626 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 19:39:05.465020   20626 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:39:05.465117   20626 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 19:39:05.479450   20626 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 19:39:05.479507   20626 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 19:39:05.479697   20626 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 19:39:05.479755   20626 cni.go:84] Creating CNI manager for ""
	I0130 19:39:05.479767   20626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 19:39:05.479775   20626 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 19:39:05.479786   20626 start_flags.go:321] config:
	{Name:ingress-addon-legacy-223875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-223875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:39:05.479899   20626 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:39:05.481714   20626 out.go:177] * Starting control plane node ingress-addon-legacy-223875 in cluster ingress-addon-legacy-223875
	I0130 19:39:05.483075   20626 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0130 19:39:05.588664   20626 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0130 19:39:05.588690   20626 cache.go:56] Caching tarball of preloaded images
	I0130 19:39:05.588831   20626 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0130 19:39:05.590589   20626 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0130 19:39:05.591927   20626 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:39:05.703342   20626 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0130 19:39:27.701375   20626 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:39:27.701473   20626 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:39:28.677776   20626 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0130 19:39:28.678091   20626 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/config.json ...
	I0130 19:39:28.678117   20626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/config.json: {Name:mk9802b7baa4513a8d42b47b2c934007100c9648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:39:28.678277   20626 start.go:365] acquiring machines lock for ingress-addon-legacy-223875: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 19:39:28.678308   20626 start.go:369] acquired machines lock for "ingress-addon-legacy-223875" in 15.974µs
	I0130 19:39:28.678323   20626 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-223875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-223875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 19:39:28.678387   20626 start.go:125] createHost starting for "" (driver="kvm2")
	I0130 19:39:28.681562   20626 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0130 19:39:28.681761   20626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:39:28.681796   20626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:39:28.695562   20626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0130 19:39:28.695938   20626 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:39:28.696537   20626 main.go:141] libmachine: Using API Version  1
	I0130 19:39:28.696556   20626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:39:28.696845   20626 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:39:28.697067   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetMachineName
	I0130 19:39:28.697212   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .DriverName
	I0130 19:39:28.697391   20626 start.go:159] libmachine.API.Create for "ingress-addon-legacy-223875" (driver="kvm2")
	I0130 19:39:28.697428   20626 client.go:168] LocalClient.Create starting
	I0130 19:39:28.697459   20626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem
	I0130 19:39:28.697497   20626 main.go:141] libmachine: Decoding PEM data...
	I0130 19:39:28.697519   20626 main.go:141] libmachine: Parsing certificate...
	I0130 19:39:28.697583   20626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem
	I0130 19:39:28.697609   20626 main.go:141] libmachine: Decoding PEM data...
	I0130 19:39:28.697625   20626 main.go:141] libmachine: Parsing certificate...
	I0130 19:39:28.697648   20626 main.go:141] libmachine: Running pre-create checks...
	I0130 19:39:28.697664   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .PreCreateCheck
	I0130 19:39:28.697961   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetConfigRaw
	I0130 19:39:28.698432   20626 main.go:141] libmachine: Creating machine...
	I0130 19:39:28.698452   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .Create
	I0130 19:39:28.698565   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Creating KVM machine...
	I0130 19:39:28.699823   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found existing default KVM network
	I0130 19:39:28.700434   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:28.700316   20707 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a10}
	I0130 19:39:28.705577   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | trying to create private KVM network mk-ingress-addon-legacy-223875 192.168.39.0/24...
	I0130 19:39:28.772949   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Setting up store path in /home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875 ...
	I0130 19:39:28.772979   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | private KVM network mk-ingress-addon-legacy-223875 192.168.39.0/24 created
	I0130 19:39:28.772994   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Building disk image from file:///home/jenkins/minikube-integration/18007-4458/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 19:39:28.773017   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:28.772877   20707 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:39:28.773065   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Downloading /home/jenkins/minikube-integration/18007-4458/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18007-4458/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0130 19:39:28.978542   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:28.978421   20707 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/id_rsa...
	I0130 19:39:29.297376   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:29.297234   20707 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/ingress-addon-legacy-223875.rawdisk...
	I0130 19:39:29.297412   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Writing magic tar header
	I0130 19:39:29.297434   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Writing SSH key tar header
	I0130 19:39:29.297456   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:29.297346   20707 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875 ...
	I0130 19:39:29.297475   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875 (perms=drwx------)
	I0130 19:39:29.297488   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458/.minikube/machines (perms=drwxr-xr-x)
	I0130 19:39:29.297495   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458/.minikube (perms=drwxr-xr-x)
	I0130 19:39:29.297504   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458 (perms=drwxrwxr-x)
	I0130 19:39:29.297512   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0130 19:39:29.297526   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0130 19:39:29.297538   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Creating domain...
	I0130 19:39:29.297555   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875
	I0130 19:39:29.297569   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458/.minikube/machines
	I0130 19:39:29.297578   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:39:29.297586   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458
	I0130 19:39:29.297608   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0130 19:39:29.297649   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Checking permissions on dir: /home/jenkins
	I0130 19:39:29.297678   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Checking permissions on dir: /home
	I0130 19:39:29.297697   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Skipping /home - not owner
	I0130 19:39:29.298583   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) define libvirt domain using xml: 
	I0130 19:39:29.298608   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) <domain type='kvm'>
	I0130 19:39:29.298618   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   <name>ingress-addon-legacy-223875</name>
	I0130 19:39:29.298625   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   <memory unit='MiB'>4096</memory>
	I0130 19:39:29.298641   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   <vcpu>2</vcpu>
	I0130 19:39:29.298655   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   <features>
	I0130 19:39:29.298666   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <acpi/>
	I0130 19:39:29.298679   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <apic/>
	I0130 19:39:29.298691   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <pae/>
	I0130 19:39:29.298703   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     
	I0130 19:39:29.298712   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   </features>
	I0130 19:39:29.298721   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   <cpu mode='host-passthrough'>
	I0130 19:39:29.298727   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   
	I0130 19:39:29.298734   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   </cpu>
	I0130 19:39:29.298740   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   <os>
	I0130 19:39:29.298754   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <type>hvm</type>
	I0130 19:39:29.298763   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <boot dev='cdrom'/>
	I0130 19:39:29.298768   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <boot dev='hd'/>
	I0130 19:39:29.298778   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <bootmenu enable='no'/>
	I0130 19:39:29.298788   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   </os>
	I0130 19:39:29.298828   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   <devices>
	I0130 19:39:29.298855   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <disk type='file' device='cdrom'>
	I0130 19:39:29.298875   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <source file='/home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/boot2docker.iso'/>
	I0130 19:39:29.298896   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <target dev='hdc' bus='scsi'/>
	I0130 19:39:29.298913   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <readonly/>
	I0130 19:39:29.298929   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     </disk>
	I0130 19:39:29.298943   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <disk type='file' device='disk'>
	I0130 19:39:29.298957   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0130 19:39:29.298977   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <source file='/home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/ingress-addon-legacy-223875.rawdisk'/>
	I0130 19:39:29.298990   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <target dev='hda' bus='virtio'/>
	I0130 19:39:29.299000   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     </disk>
	I0130 19:39:29.299014   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <interface type='network'>
	I0130 19:39:29.299028   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <source network='mk-ingress-addon-legacy-223875'/>
	I0130 19:39:29.299043   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <model type='virtio'/>
	I0130 19:39:29.299054   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     </interface>
	I0130 19:39:29.299078   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <interface type='network'>
	I0130 19:39:29.299097   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <source network='default'/>
	I0130 19:39:29.299108   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <model type='virtio'/>
	I0130 19:39:29.299113   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     </interface>
	I0130 19:39:29.299123   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <serial type='pty'>
	I0130 19:39:29.299136   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <target port='0'/>
	I0130 19:39:29.299145   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     </serial>
	I0130 19:39:29.299150   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <console type='pty'>
	I0130 19:39:29.299158   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <target type='serial' port='0'/>
	I0130 19:39:29.299169   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     </console>
	I0130 19:39:29.299178   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     <rng model='virtio'>
	I0130 19:39:29.299185   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)       <backend model='random'>/dev/random</backend>
	I0130 19:39:29.299193   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     </rng>
	I0130 19:39:29.299198   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     
	I0130 19:39:29.299204   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)     
	I0130 19:39:29.299212   20626 main.go:141] libmachine: (ingress-addon-legacy-223875)   </devices>
	I0130 19:39:29.299218   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) </domain>
	I0130 19:39:29.299225   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) 
	I0130 19:39:29.303625   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:f0:0e:b9 in network default
	I0130 19:39:29.304193   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Ensuring networks are active...
	I0130 19:39:29.304218   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:29.304860   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Ensuring network default is active
	I0130 19:39:29.305207   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Ensuring network mk-ingress-addon-legacy-223875 is active
	I0130 19:39:29.305850   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Getting domain xml...
	I0130 19:39:29.306478   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Creating domain...
	I0130 19:39:30.478661   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Waiting to get IP...
	I0130 19:39:30.479412   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:30.479740   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:30.479777   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:30.479732   20707 retry.go:31] will retry after 297.827824ms: waiting for machine to come up
	I0130 19:39:30.779221   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:30.779732   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:30.779765   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:30.779679   20707 retry.go:31] will retry after 383.177664ms: waiting for machine to come up
	I0130 19:39:31.164186   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:31.164693   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:31.164720   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:31.164652   20707 retry.go:31] will retry after 414.494549ms: waiting for machine to come up
	I0130 19:39:31.581136   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:31.581550   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:31.581581   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:31.581498   20707 retry.go:31] will retry after 388.661229ms: waiting for machine to come up
	I0130 19:39:31.972081   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:31.972503   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:31.972529   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:31.972447   20707 retry.go:31] will retry after 605.204254ms: waiting for machine to come up
	I0130 19:39:32.579379   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:32.580076   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:32.580105   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:32.580013   20707 retry.go:31] will retry after 587.347476ms: waiting for machine to come up
	I0130 19:39:33.168489   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:33.168901   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:33.168921   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:33.168874   20707 retry.go:31] will retry after 738.847925ms: waiting for machine to come up
	I0130 19:39:33.909108   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:33.909588   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:33.909617   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:33.909546   20707 retry.go:31] will retry after 1.151502619s: waiting for machine to come up
	I0130 19:39:35.062889   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:35.063370   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:35.063394   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:35.063320   20707 retry.go:31] will retry after 1.539412096s: waiting for machine to come up
	I0130 19:39:36.604984   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:36.605419   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:36.605493   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:36.605364   20707 retry.go:31] will retry after 1.689225317s: waiting for machine to come up
	I0130 19:39:38.297161   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:38.297644   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:38.297677   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:38.297612   20707 retry.go:31] will retry after 2.169478824s: waiting for machine to come up
	I0130 19:39:40.468516   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:40.477445   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:40.477474   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:40.468906   20707 retry.go:31] will retry after 3.241966386s: waiting for machine to come up
	I0130 19:39:43.712329   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:43.712766   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:43.712788   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:43.712723   20707 retry.go:31] will retry after 3.497360487s: waiting for machine to come up
	I0130 19:39:47.214342   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:47.214680   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find current IP address of domain ingress-addon-legacy-223875 in network mk-ingress-addon-legacy-223875
	I0130 19:39:47.214710   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | I0130 19:39:47.214626   20707 retry.go:31] will retry after 5.108373518s: waiting for machine to come up
	I0130 19:39:52.325267   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.325666   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Found IP for machine: 192.168.39.152
	I0130 19:39:52.325688   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Reserving static IP address...
	I0130 19:39:52.325703   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has current primary IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.325994   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-223875", mac: "52:54:00:7d:b6:74", ip: "192.168.39.152"} in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.393633   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Getting to WaitForSSH function...
	I0130 19:39:52.393671   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Reserved static IP address: 192.168.39.152
	I0130 19:39:52.393716   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Waiting for SSH to be available...
	I0130 19:39:52.395918   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.396255   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:52.396291   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.396357   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Using SSH client type: external
	I0130 19:39:52.396383   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/id_rsa (-rw-------)
	I0130 19:39:52.396412   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 19:39:52.396428   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | About to run SSH command:
	I0130 19:39:52.396442   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | exit 0
	I0130 19:39:52.486802   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | SSH cmd err, output: <nil>: 
	I0130 19:39:52.487029   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) KVM machine creation complete!
	I0130 19:39:52.487398   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetConfigRaw
	I0130 19:39:52.487865   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .DriverName
	I0130 19:39:52.488067   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .DriverName
	I0130 19:39:52.488231   20626 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0130 19:39:52.488245   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetState
	I0130 19:39:52.489381   20626 main.go:141] libmachine: Detecting operating system of created instance...
	I0130 19:39:52.489395   20626 main.go:141] libmachine: Waiting for SSH to be available...
	I0130 19:39:52.489400   20626 main.go:141] libmachine: Getting to WaitForSSH function...
	I0130 19:39:52.489407   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:39:52.491505   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.491802   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:52.491835   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.491963   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:39:52.492098   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:52.492207   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:52.492327   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:39:52.492463   20626 main.go:141] libmachine: Using SSH client type: native
	I0130 19:39:52.492847   20626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0130 19:39:52.492861   20626 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0130 19:39:52.610251   20626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 19:39:52.610285   20626 main.go:141] libmachine: Detecting the provisioner...
	I0130 19:39:52.610294   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:39:52.613109   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.613495   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:52.613529   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.613730   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:39:52.613923   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:52.614104   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:52.614231   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:39:52.614379   20626 main.go:141] libmachine: Using SSH client type: native
	I0130 19:39:52.614676   20626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0130 19:39:52.614689   20626 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0130 19:39:52.732573   20626 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0130 19:39:52.732653   20626 main.go:141] libmachine: found compatible host: buildroot
	I0130 19:39:52.732664   20626 main.go:141] libmachine: Provisioning with buildroot...
	I0130 19:39:52.732675   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetMachineName
	I0130 19:39:52.732924   20626 buildroot.go:166] provisioning hostname "ingress-addon-legacy-223875"
	I0130 19:39:52.732952   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetMachineName
	I0130 19:39:52.733143   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:39:52.735503   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.735796   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:52.735821   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.735924   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:39:52.736103   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:52.736261   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:52.736410   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:39:52.736579   20626 main.go:141] libmachine: Using SSH client type: native
	I0130 19:39:52.736927   20626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0130 19:39:52.736942   20626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-223875 && echo "ingress-addon-legacy-223875" | sudo tee /etc/hostname
	I0130 19:39:52.867752   20626 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-223875
	
	I0130 19:39:52.867781   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:39:52.870422   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.870731   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:52.870765   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.870890   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:39:52.871082   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:52.871210   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:52.871362   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:39:52.871496   20626 main.go:141] libmachine: Using SSH client type: native
	I0130 19:39:52.871787   20626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0130 19:39:52.871805   20626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-223875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-223875/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-223875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 19:39:52.994187   20626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
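The exchange above is the provisioner pointing 127.0.1.1 at the new hostname over SSH. A minimal sketch of how that shell snippet can be assembled for an arbitrary hostname (the helper name below is an assumption for illustration, not minikube's actual API):

	package main

	import "fmt"

	// hostsFixupCmd returns the shell snippet the provisioner ran over SSH above:
	// it maps 127.0.1.1 to the machine's hostname in /etc/hosts if no entry exists.
	func hostsFixupCmd(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
				else
					echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname, hostname, hostname)
	}

	func main() {
		fmt.Println(hostsFixupCmd("ingress-addon-legacy-223875"))
	}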
	I0130 19:39:52.994211   20626 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 19:39:52.994226   20626 buildroot.go:174] setting up certificates
	I0130 19:39:52.994240   20626 provision.go:83] configureAuth start
	I0130 19:39:52.994252   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetMachineName
	I0130 19:39:52.994499   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetIP
	I0130 19:39:52.996766   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.997113   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:52.997138   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.997283   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:39:52.999485   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.999775   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:52.999802   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:52.999962   20626 provision.go:138] copyHostCerts
	I0130 19:39:52.999993   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 19:39:53.000026   20626 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 19:39:53.000036   20626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 19:39:53.000099   20626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 19:39:53.000169   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 19:39:53.000191   20626 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 19:39:53.000197   20626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 19:39:53.000221   20626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 19:39:53.000264   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 19:39:53.000279   20626 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 19:39:53.000285   20626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 19:39:53.000303   20626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 19:39:53.000345   20626 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-223875 san=[192.168.39.152 192.168.39.152 localhost 127.0.0.1 minikube ingress-addon-legacy-223875]
	I0130 19:39:53.151530   20626 provision.go:172] copyRemoteCerts
	I0130 19:39:53.151580   20626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 19:39:53.151601   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:39:53.154159   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.154481   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:53.154511   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.154660   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:39:53.154835   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:53.154998   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:39:53.155097   20626 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/id_rsa Username:docker}
	I0130 19:39:53.244975   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0130 19:39:53.245043   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0130 19:39:53.268174   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0130 19:39:53.268230   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 19:39:53.290187   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0130 19:39:53.290249   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 19:39:53.310644   20626 provision.go:86] duration metric: configureAuth took 316.393639ms
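configureAuth above generates a server certificate and then copies a fixed set of host certificates onto the guest. A minimal sketch of that source-to-destination mapping, assuming shortened relative paths under .minikube (minikube models these as vm_assets FileAssets, as the surrounding lines show):

	package main

	import "fmt"

	func main() {
		// Which local certs land where on the guest for docker-machine style TLS.
		certs := map[string]string{
			".minikube/machines/server.pem":     "/etc/docker/server.pem",
			".minikube/machines/server-key.pem": "/etc/docker/server-key.pem",
			".minikube/certs/ca.pem":            "/etc/docker/ca.pem",
		}
		for src, dst := range certs {
			fmt.Printf("scp %s --> %s\n", src, dst)
		}
	}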
	I0130 19:39:53.310662   20626 buildroot.go:189] setting minikube options for container-runtime
	I0130 19:39:53.310862   20626 config.go:182] Loaded profile config "ingress-addon-legacy-223875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0130 19:39:53.310938   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:39:53.313273   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.313565   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:53.313599   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.313703   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:39:53.313878   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:53.314021   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:53.314155   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:39:53.314307   20626 main.go:141] libmachine: Using SSH client type: native
	I0130 19:39:53.314604   20626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0130 19:39:53.314619   20626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 19:39:53.636242   20626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
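The printf command above writes the CRI-O drop-in; the %!s(MISSING) in the logged command is a logging artifact, and the file itself only carries the options line echoed back in the output. A minimal sketch of the assumed /etc/sysconfig/crio.minikube content:

	package main

	import "fmt"

	// crioMinikubeDropIn renders the one-line drop-in written before crio is
	// restarted; the CIDR is the service network passed as --insecure-registry.
	func crioMinikubeDropIn(insecureRegistry string) string {
		return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureRegistry)
	}

	func main() {
		fmt.Print(crioMinikubeDropIn("10.96.0.0/12"))
	}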
	I0130 19:39:53.636275   20626 main.go:141] libmachine: Checking connection to Docker...
	I0130 19:39:53.636287   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetURL
	I0130 19:39:53.637617   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Using libvirt version 6000000
	I0130 19:39:53.639568   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.639908   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:53.639938   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.640096   20626 main.go:141] libmachine: Docker is up and running!
	I0130 19:39:53.640110   20626 main.go:141] libmachine: Reticulating splines...
	I0130 19:39:53.640117   20626 client.go:171] LocalClient.Create took 24.942681721s
	I0130 19:39:53.640142   20626 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-223875" took 24.94275058s
	I0130 19:39:53.640166   20626 start.go:300] post-start starting for "ingress-addon-legacy-223875" (driver="kvm2")
	I0130 19:39:53.640180   20626 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 19:39:53.640202   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .DriverName
	I0130 19:39:53.640441   20626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 19:39:53.640466   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:39:53.642659   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.642958   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:53.642994   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.643093   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:39:53.643243   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:53.643402   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:39:53.643531   20626 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/id_rsa Username:docker}
	I0130 19:39:53.735823   20626 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 19:39:53.740213   20626 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 19:39:53.740233   20626 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 19:39:53.740298   20626 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 19:39:53.740385   20626 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 19:39:53.740397   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> /etc/ssl/certs/116672.pem
	I0130 19:39:53.740516   20626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 19:39:53.748655   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 19:39:53.770903   20626 start.go:303] post-start completed in 130.724367ms
	I0130 19:39:53.770967   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetConfigRaw
	I0130 19:39:53.771556   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetIP
	I0130 19:39:53.773988   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.774329   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:53.774364   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.774535   20626 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/config.json ...
	I0130 19:39:53.774692   20626 start.go:128] duration metric: createHost completed in 25.096297463s
	I0130 19:39:53.774711   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:39:53.776733   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.777018   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:53.777038   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.777177   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:39:53.777328   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:53.777488   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:53.777614   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:39:53.777778   20626 main.go:141] libmachine: Using SSH client type: native
	I0130 19:39:53.778221   20626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0130 19:39:53.778236   20626 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 19:39:53.895566   20626 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706643593.866300430
	
	I0130 19:39:53.895591   20626 fix.go:206] guest clock: 1706643593.866300430
	I0130 19:39:53.895600   20626 fix.go:219] Guest: 2024-01-30 19:39:53.86630043 +0000 UTC Remote: 2024-01-30 19:39:53.774701964 +0000 UTC m=+48.406582377 (delta=91.598466ms)
	I0130 19:39:53.895623   20626 fix.go:190] guest clock delta is within tolerance: 91.598466ms
	I0130 19:39:53.895632   20626 start.go:83] releasing machines lock for "ingress-addon-legacy-223875", held for 25.217317196s
	I0130 19:39:53.895657   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .DriverName
	I0130 19:39:53.895922   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetIP
	I0130 19:39:53.898277   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.898542   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:53.898573   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.898666   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .DriverName
	I0130 19:39:53.899191   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .DriverName
	I0130 19:39:53.899370   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .DriverName
	I0130 19:39:53.899433   20626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 19:39:53.899461   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:39:53.899556   20626 ssh_runner.go:195] Run: cat /version.json
	I0130 19:39:53.899578   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:39:53.901953   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.902204   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.902279   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:53.902306   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.902441   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:39:53.902608   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:53.902631   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:53.902670   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:53.902757   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:39:53.902828   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:39:53.902886   20626 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/id_rsa Username:docker}
	I0130 19:39:53.902952   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:39:53.903098   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:39:53.903242   20626 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/id_rsa Username:docker}
	I0130 19:39:54.011832   20626 ssh_runner.go:195] Run: systemctl --version
	I0130 19:39:54.017482   20626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 19:39:54.175485   20626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 19:39:54.182374   20626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 19:39:54.182435   20626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 19:39:54.196301   20626 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 19:39:54.196326   20626 start.go:475] detecting cgroup driver to use...
	I0130 19:39:54.196378   20626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 19:39:54.211321   20626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 19:39:54.222474   20626 docker.go:217] disabling cri-docker service (if available) ...
	I0130 19:39:54.222528   20626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 19:39:54.233809   20626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 19:39:54.245085   20626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 19:39:54.352176   20626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 19:39:54.472065   20626 docker.go:233] disabling docker service ...
	I0130 19:39:54.472129   20626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 19:39:54.485158   20626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 19:39:54.496110   20626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 19:39:54.607803   20626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 19:39:54.722316   20626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 19:39:54.734802   20626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 19:39:54.750975   20626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0130 19:39:54.751023   20626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 19:39:54.760494   20626 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 19:39:54.760552   20626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 19:39:54.769253   20626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 19:39:54.777659   20626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 19:39:54.786427   20626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 19:39:54.796322   20626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 19:39:54.804573   20626 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 19:39:54.804628   20626 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 19:39:54.818007   20626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 19:39:54.827918   20626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 19:39:54.949092   20626 ssh_runner.go:195] Run: sudo systemctl restart crio
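The run of sed commands above reconfigures CRI-O for this profile: pause image registry.k8s.io/pause:3.2, cgroupfs as the cgroup manager, conmon in the pod cgroup, then a daemon-reload and a crio restart. A minimal sketch listing the same sequence (the slice is an illustration, not minikube's actual code path):

	package main

	import "fmt"

	func main() {
		// Commands run over SSH in the log above, in order.
		cmds := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
		for _, c := range cmds {
			fmt.Println(c)
		}
	}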
	I0130 19:39:55.104606   20626 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 19:39:55.104663   20626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 19:39:55.109322   20626 start.go:543] Will wait 60s for crictl version
	I0130 19:39:55.109360   20626 ssh_runner.go:195] Run: which crictl
	I0130 19:39:55.112796   20626 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 19:39:55.149604   20626 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 19:39:55.149665   20626 ssh_runner.go:195] Run: crio --version
	I0130 19:39:55.205439   20626 ssh_runner.go:195] Run: crio --version
	I0130 19:39:55.252869   20626 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0130 19:39:55.254071   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetIP
	I0130 19:39:55.256558   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:55.256914   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:39:55.256945   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:39:55.257154   20626 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 19:39:55.260827   20626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 19:39:55.271605   20626 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0130 19:39:55.271661   20626 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 19:39:55.305558   20626 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0130 19:39:55.305620   20626 ssh_runner.go:195] Run: which lz4
	I0130 19:39:55.309123   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0130 19:39:55.309205   20626 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 19:39:55.313023   20626 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 19:39:55.313050   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0130 19:39:57.322131   20626 crio.go:444] Took 2.012943 seconds to copy over tarball
	I0130 19:39:57.322201   20626 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 19:40:00.326747   20626 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.004519959s)
	I0130 19:40:00.326771   20626 crio.go:451] Took 3.004616 seconds to extract the tarball
	I0130 19:40:00.326780   20626 ssh_runner.go:146] rm: /preloaded.tar.lz4
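Because no preloaded images were found in the runtime, the preload tarball is copied onto the guest, unpacked into /var to seed crio's image store, and then removed. A minimal sketch of that flow, with an assumed helper name:

	package main

	import "fmt"

	// applyPreload describes the three steps logged above for a given tarball.
	func applyPreload(tarball string) []string {
		return []string{
			fmt.Sprintf("scp %s --> /preloaded.tar.lz4", tarball),
			"sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4",
			"rm /preloaded.tar.lz4",
		}
	}

	func main() {
		for _, step := range applyPreload("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4") {
			fmt.Println(step)
		}
	}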
	I0130 19:40:00.371655   20626 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 19:40:00.425994   20626 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0130 19:40:00.684213   20626 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 19:40:00.684284   20626 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 19:40:00.684310   20626 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0130 19:40:00.684330   20626 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0130 19:40:00.684366   20626 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0130 19:40:00.684312   20626 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0130 19:40:00.684289   20626 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0130 19:40:00.684565   20626 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0130 19:40:00.684571   20626 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0130 19:40:00.685801   20626 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0130 19:40:00.685814   20626 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0130 19:40:00.685821   20626 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0130 19:40:00.685837   20626 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0130 19:40:00.685801   20626 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 19:40:00.685850   20626 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0130 19:40:00.685861   20626 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0130 19:40:00.685861   20626 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0130 19:40:00.872926   20626 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0130 19:40:00.912709   20626 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0130 19:40:00.912745   20626 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0130 19:40:00.912777   20626 ssh_runner.go:195] Run: which crictl
	I0130 19:40:00.916394   20626 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0130 19:40:00.941198   20626 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0130 19:40:00.950876   20626 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0130 19:40:00.985470   20626 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0130 19:40:00.985627   20626 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0130 19:40:00.985666   20626 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0130 19:40:00.985702   20626 ssh_runner.go:195] Run: which crictl
	I0130 19:40:00.991077   20626 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0130 19:40:00.998317   20626 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0130 19:40:01.004472   20626 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0130 19:40:01.009210   20626 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0130 19:40:01.090130   20626 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0130 19:40:01.090171   20626 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0130 19:40:01.090224   20626 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0130 19:40:01.090246   20626 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0130 19:40:01.090284   20626 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0130 19:40:01.090230   20626 ssh_runner.go:195] Run: which crictl
	I0130 19:40:01.090317   20626 ssh_runner.go:195] Run: which crictl
	I0130 19:40:01.109162   20626 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0130 19:40:01.109202   20626 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0130 19:40:01.109247   20626 ssh_runner.go:195] Run: which crictl
	I0130 19:40:01.132651   20626 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0130 19:40:01.132696   20626 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0130 19:40:01.132740   20626 ssh_runner.go:195] Run: which crictl
	I0130 19:40:01.136276   20626 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0130 19:40:01.136315   20626 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0130 19:40:01.136367   20626 ssh_runner.go:195] Run: which crictl
	I0130 19:40:01.136382   20626 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0130 19:40:01.136398   20626 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0130 19:40:01.162326   20626 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0130 19:40:01.162402   20626 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0130 19:40:01.162414   20626 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0130 19:40:01.197466   20626 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0130 19:40:01.197540   20626 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0130 19:40:01.231788   20626 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0130 19:40:01.249980   20626 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0130 19:40:01.253481   20626 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0130 19:40:01.266245   20626 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0130 19:40:01.628712   20626 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 19:40:01.776503   20626 cache_images.go:92] LoadImages completed in 1.092267496s
	W0130 19:40:01.776601   20626 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
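Each LoadImages entry above follows the same pattern: inspect the image in the runtime, remove it when the stored hash does not match the expected one, and queue a load from the local cache (which fails here because the cached pause_3.2 file is missing on the host). A minimal sketch of that per-image check, with assumed type and field names:

	package main

	import "fmt"

	// imageCheck captures one entry of the loop logged above.
	type imageCheck struct {
		name, wantHash, gotHash string
	}

	// commands returns the runtime commands issued for an image that needs transfer.
	func (c imageCheck) commands() []string {
		if c.gotHash == c.wantHash {
			return nil // already present with the expected ID, nothing to do
		}
		return []string{
			fmt.Sprintf("sudo podman image inspect --format {{.Id}} %s", c.name),
			fmt.Sprintf("sudo /usr/bin/crictl rmi %s", c.name),
			fmt.Sprintf("load %s from .minikube/cache/images/amd64", c.name),
		}
	}

	func main() {
		c := imageCheck{name: "registry.k8s.io/pause:3.2"}
		for _, cmd := range c.commands() {
			fmt.Println(cmd)
		}
	}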
	I0130 19:40:01.776691   20626 ssh_runner.go:195] Run: crio config
	I0130 19:40:01.837351   20626 cni.go:84] Creating CNI manager for ""
	I0130 19:40:01.837377   20626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 19:40:01.837398   20626 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 19:40:01.837426   20626 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-223875 NodeName:ingress-addon-legacy-223875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0130 19:40:01.837564   20626 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-223875"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 19:40:01.837635   20626 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-223875 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-223875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 19:40:01.837683   20626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0130 19:40:01.847180   20626 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 19:40:01.847239   20626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 19:40:01.856179   20626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0130 19:40:01.871343   20626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0130 19:40:01.886600   20626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0130 19:40:01.902038   20626 ssh_runner.go:195] Run: grep 192.168.39.152	control-plane.minikube.internal$ /etc/hosts
	I0130 19:40:01.905439   20626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.152	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 19:40:01.916152   20626 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875 for IP: 192.168.39.152
	I0130 19:40:01.916180   20626 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:40:01.916331   20626 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 19:40:01.916394   20626 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 19:40:01.916442   20626 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.key
	I0130 19:40:01.916454   20626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt with IP's: []
	I0130 19:40:02.268776   20626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt ...
	I0130 19:40:02.268801   20626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: {Name:mk037691974f5697dd88713a5a954568c8ece9b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:40:02.268959   20626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.key ...
	I0130 19:40:02.268972   20626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.key: {Name:mk81c4787a526f90611389002561ae052ef38952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:40:02.269039   20626 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.key.02b03154
	I0130 19:40:02.269053   20626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.crt.02b03154 with IP's: [192.168.39.152 10.96.0.1 127.0.0.1 10.0.0.1]
	I0130 19:40:02.523131   20626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.crt.02b03154 ...
	I0130 19:40:02.523162   20626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.crt.02b03154: {Name:mk1d806496b27de6c6a724e06adc91f56b9379c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:40:02.523344   20626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.key.02b03154 ...
	I0130 19:40:02.523360   20626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.key.02b03154: {Name:mkf9f6e986cd4ae82026327bc7df428cb5da2781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:40:02.523434   20626 certs.go:337] copying /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.crt.02b03154 -> /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.crt
	I0130 19:40:02.523512   20626 certs.go:341] copying /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.key.02b03154 -> /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.key
	I0130 19:40:02.523565   20626 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/proxy-client.key
	I0130 19:40:02.523578   20626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/proxy-client.crt with IP's: []
	I0130 19:40:02.719959   20626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/proxy-client.crt ...
	I0130 19:40:02.719986   20626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/proxy-client.crt: {Name:mk2073231667cd37e630dc7b4800e899fdd88340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:40:02.720134   20626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/proxy-client.key ...
	I0130 19:40:02.720147   20626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/proxy-client.key: {Name:mk83579008ba5cd3f7ae381920ba88f5993f3959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
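The certs.go/crypto.go steps above generate a client certificate, an apiserver certificate for the listed IP SANs, and an aggregator proxy-client certificate, all signed by the shared minikubeCA. A minimal standard-library sketch of the apiserver-style signing step (names, lifetimes, and parameters are illustrative assumptions, not minikube's helpers; error handling is mostly elided):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Self-signed CA standing in for .minikube/ca.crt / ca.key.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate for the SANs shown in the apiserver cert step above.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.39.152"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("generated apiserver-style cert: %d DER bytes\n", len(der))
	}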
	I0130 19:40:02.720207   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0130 19:40:02.720230   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0130 19:40:02.720244   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0130 19:40:02.720259   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0130 19:40:02.720275   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0130 19:40:02.720287   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0130 19:40:02.720305   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0130 19:40:02.720321   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0130 19:40:02.720375   20626 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 19:40:02.720411   20626 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 19:40:02.720421   20626 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 19:40:02.720446   20626 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 19:40:02.720467   20626 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 19:40:02.720493   20626 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 19:40:02.720531   20626 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 19:40:02.720554   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> /usr/share/ca-certificates/116672.pem
	I0130 19:40:02.720570   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0130 19:40:02.720578   20626 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem -> /usr/share/ca-certificates/11667.pem
	I0130 19:40:02.721176   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 19:40:02.744002   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 19:40:02.764962   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 19:40:02.785831   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 19:40:02.807131   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 19:40:02.828210   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 19:40:02.848911   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 19:40:02.869755   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 19:40:02.890968   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 19:40:02.911053   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 19:40:02.930777   20626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 19:40:02.950786   20626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 19:40:02.965787   20626 ssh_runner.go:195] Run: openssl version
	I0130 19:40:02.971063   20626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 19:40:02.981315   20626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 19:40:02.985504   20626 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 19:40:02.985543   20626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 19:40:02.990743   20626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 19:40:03.000385   20626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 19:40:03.009932   20626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 19:40:03.014100   20626 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 19:40:03.014146   20626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 19:40:03.018978   20626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 19:40:03.028485   20626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 19:40:03.037949   20626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 19:40:03.042039   20626 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 19:40:03.042069   20626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 19:40:03.047000   20626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 19:40:03.056647   20626 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 19:40:03.060515   20626 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 19:40:03.060552   20626 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-223875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-223875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:40:03.060617   20626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 19:40:03.060649   20626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 19:40:03.095162   20626 cri.go:89] found id: ""
	I0130 19:40:03.095216   20626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 19:40:03.104596   20626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 19:40:03.113373   20626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 19:40:03.122328   20626 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 19:40:03.122368   20626 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0130 19:40:03.180756   20626 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0130 19:40:03.180947   20626 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 19:40:03.312992   20626 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 19:40:03.313101   20626 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 19:40:03.313194   20626 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 19:40:03.516512   20626 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 19:40:03.517416   20626 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 19:40:03.517488   20626 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 19:40:03.621056   20626 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 19:40:03.623244   20626 out.go:204]   - Generating certificates and keys ...
	I0130 19:40:03.623347   20626 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 19:40:03.623425   20626 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 19:40:04.076599   20626 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0130 19:40:04.246074   20626 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0130 19:40:04.451718   20626 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0130 19:40:04.693157   20626 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0130 19:40:04.929547   20626 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0130 19:40:04.930031   20626 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-223875 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I0130 19:40:05.027599   20626 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0130 19:40:05.030039   20626 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-223875 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I0130 19:40:05.244165   20626 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0130 19:40:05.324983   20626 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0130 19:40:05.388253   20626 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0130 19:40:05.388496   20626 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 19:40:05.813894   20626 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 19:40:06.025775   20626 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 19:40:06.235315   20626 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 19:40:06.898886   20626 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 19:40:06.900112   20626 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 19:40:06.901880   20626 out.go:204]   - Booting up control plane ...
	I0130 19:40:06.902036   20626 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 19:40:06.913365   20626 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 19:40:06.914651   20626 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 19:40:06.915845   20626 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 19:40:06.924954   20626 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 19:40:14.927183   20626 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002754 seconds
	I0130 19:40:14.927373   20626 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 19:40:14.940551   20626 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 19:40:15.470773   20626 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 19:40:15.470993   20626 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-223875 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0130 19:40:15.980398   20626 kubeadm.go:322] [bootstrap-token] Using token: ixfbd5.onpiolq0o7xnrar4
	I0130 19:40:15.982021   20626 out.go:204]   - Configuring RBAC rules ...
	I0130 19:40:15.982140   20626 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 19:40:15.990611   20626 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 19:40:16.001840   20626 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 19:40:16.004309   20626 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 19:40:16.007002   20626 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 19:40:16.011249   20626 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 19:40:16.022009   20626 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 19:40:16.289927   20626 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 19:40:16.409746   20626 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 19:40:16.410958   20626 kubeadm.go:322] 
	I0130 19:40:16.411024   20626 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 19:40:16.411046   20626 kubeadm.go:322] 
	I0130 19:40:16.411164   20626 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 19:40:16.411189   20626 kubeadm.go:322] 
	I0130 19:40:16.411227   20626 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 19:40:16.411337   20626 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 19:40:16.411383   20626 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 19:40:16.411390   20626 kubeadm.go:322] 
	I0130 19:40:16.411476   20626 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 19:40:16.411571   20626 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 19:40:16.411632   20626 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 19:40:16.411639   20626 kubeadm.go:322] 
	I0130 19:40:16.411705   20626 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 19:40:16.411794   20626 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 19:40:16.411806   20626 kubeadm.go:322] 
	I0130 19:40:16.411937   20626 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ixfbd5.onpiolq0o7xnrar4 \
	I0130 19:40:16.412066   20626 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 19:40:16.412101   20626 kubeadm.go:322]     --control-plane 
	I0130 19:40:16.412108   20626 kubeadm.go:322] 
	I0130 19:40:16.412223   20626 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 19:40:16.412234   20626 kubeadm.go:322] 
	I0130 19:40:16.412338   20626 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ixfbd5.onpiolq0o7xnrar4 \
	I0130 19:40:16.412474   20626 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 19:40:16.412993   20626 kubeadm.go:322] W0130 19:40:03.163571     956 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0130 19:40:16.413158   20626 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 19:40:16.413344   20626 kubeadm.go:322] W0130 19:40:06.899740     956 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0130 19:40:16.413518   20626 kubeadm.go:322] W0130 19:40:06.901090     956 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0130 19:40:16.413536   20626 cni.go:84] Creating CNI manager for ""
	I0130 19:40:16.413546   20626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 19:40:16.415087   20626 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 19:40:16.416445   20626 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 19:40:16.426499   20626 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 19:40:16.442614   20626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 19:40:16.442693   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:16.442763   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=ingress-addon-legacy-223875 minikube.k8s.io/updated_at=2024_01_30T19_40_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:16.614722   20626 ops.go:34] apiserver oom_adj: -16
	I0130 19:40:16.614831   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:17.115123   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:17.615677   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:18.115643   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:18.615379   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:19.115669   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:19.615231   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:20.115032   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:20.615003   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:21.115150   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:21.614919   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:22.115814   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:22.615616   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:23.115389   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:23.615819   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:24.114991   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:24.614990   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:25.115712   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:25.615730   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:26.115764   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:26.615191   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:27.115796   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:27.615163   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:28.115324   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:28.615816   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:29.115710   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:29.615678   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:30.115212   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:30.615776   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:31.115433   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:31.615671   20626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:40:31.704161   20626 kubeadm.go:1088] duration metric: took 15.261541579s to wait for elevateKubeSystemPrivileges.
	I0130 19:40:31.704202   20626 kubeadm.go:406] StartCluster complete in 28.643650679s
	I0130 19:40:31.704225   20626 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:40:31.704317   20626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 19:40:31.704943   20626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:40:31.705199   20626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 19:40:31.705299   20626 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 19:40:31.705376   20626 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-223875"
	I0130 19:40:31.705389   20626 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-223875"
	I0130 19:40:31.705413   20626 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-223875"
	I0130 19:40:31.705425   20626 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-223875"
	I0130 19:40:31.705497   20626 host.go:66] Checking if "ingress-addon-legacy-223875" exists ...
	I0130 19:40:31.705424   20626 config.go:182] Loaded profile config "ingress-addon-legacy-223875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0130 19:40:31.705885   20626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:40:31.705914   20626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:40:31.705826   20626 kapi.go:59] client config for ingress-addon-legacy-223875: &rest.Config{Host:"https://192.168.39.152:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt", KeyFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.key", CAFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 19:40:31.705939   20626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:40:31.706063   20626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:40:31.706530   20626 cert_rotation.go:137] Starting client certificate rotation controller
	I0130 19:40:31.725391   20626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0130 19:40:31.725494   20626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46847
	I0130 19:40:31.725780   20626 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:40:31.725917   20626 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:40:31.726244   20626 main.go:141] libmachine: Using API Version  1
	I0130 19:40:31.726261   20626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:40:31.726392   20626 main.go:141] libmachine: Using API Version  1
	I0130 19:40:31.726424   20626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:40:31.726599   20626 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:40:31.726815   20626 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:40:31.726966   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetState
	I0130 19:40:31.727141   20626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:40:31.727172   20626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:40:31.729295   20626 kapi.go:59] client config for ingress-addon-legacy-223875: &rest.Config{Host:"https://192.168.39.152:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt", KeyFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.key", CAFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 19:40:31.729542   20626 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-223875"
	I0130 19:40:31.729570   20626 host.go:66] Checking if "ingress-addon-legacy-223875" exists ...
	I0130 19:40:31.729910   20626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:40:31.729937   20626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:40:31.741636   20626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43823
	I0130 19:40:31.742005   20626 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:40:31.742408   20626 main.go:141] libmachine: Using API Version  1
	I0130 19:40:31.742431   20626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:40:31.742767   20626 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:40:31.742958   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetState
	I0130 19:40:31.744149   20626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43649
	I0130 19:40:31.744509   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .DriverName
	I0130 19:40:31.744530   20626 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:40:31.746348   20626 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 19:40:31.744920   20626 main.go:141] libmachine: Using API Version  1
	I0130 19:40:31.747868   20626 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 19:40:31.746374   20626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:40:31.747885   20626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 19:40:31.747903   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:40:31.748219   20626 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:40:31.748786   20626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:40:31.748818   20626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:40:31.750765   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:40:31.751165   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:40:31.751193   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:40:31.751359   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:40:31.751543   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:40:31.751687   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:40:31.751795   20626 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/id_rsa Username:docker}
	I0130 19:40:31.762729   20626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I0130 19:40:31.763068   20626 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:40:31.763495   20626 main.go:141] libmachine: Using API Version  1
	I0130 19:40:31.763517   20626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:40:31.763806   20626 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:40:31.764024   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetState
	I0130 19:40:31.765390   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .DriverName
	I0130 19:40:31.765653   20626 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 19:40:31.765668   20626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 19:40:31.765685   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHHostname
	I0130 19:40:31.767928   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:40:31.768245   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:74", ip: ""} in network mk-ingress-addon-legacy-223875: {Iface:virbr1 ExpiryTime:2024-01-30 20:39:44 +0000 UTC Type:0 Mac:52:54:00:7d:b6:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ingress-addon-legacy-223875 Clientid:01:52:54:00:7d:b6:74}
	I0130 19:40:31.768272   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | domain ingress-addon-legacy-223875 has defined IP address 192.168.39.152 and MAC address 52:54:00:7d:b6:74 in network mk-ingress-addon-legacy-223875
	I0130 19:40:31.768397   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHPort
	I0130 19:40:31.768562   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHKeyPath
	I0130 19:40:31.768671   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .GetSSHUsername
	I0130 19:40:31.768792   20626 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/ingress-addon-legacy-223875/id_rsa Username:docker}
	I0130 19:40:31.959328   20626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 19:40:32.025289   20626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 19:40:32.127987   20626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 19:40:32.222542   20626 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-223875" context rescaled to 1 replicas
	I0130 19:40:32.222591   20626 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 19:40:32.226552   20626 out.go:177] * Verifying Kubernetes components...
	I0130 19:40:32.228108   20626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 19:40:32.848135   20626 main.go:141] libmachine: Making call to close driver server
	I0130 19:40:32.848166   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .Close
	I0130 19:40:32.848191   20626 main.go:141] libmachine: Making call to close driver server
	I0130 19:40:32.848213   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .Close
	I0130 19:40:32.848267   20626 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0130 19:40:32.848469   20626 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:40:32.848491   20626 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:40:32.848501   20626 main.go:141] libmachine: Making call to close driver server
	I0130 19:40:32.848510   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .Close
	I0130 19:40:32.848560   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Closing plugin on server side
	I0130 19:40:32.848608   20626 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:40:32.848630   20626 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:40:32.848645   20626 main.go:141] libmachine: Making call to close driver server
	I0130 19:40:32.848654   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .Close
	I0130 19:40:32.848760   20626 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:40:32.848762   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Closing plugin on server side
	I0130 19:40:32.848771   20626 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:40:32.849027   20626 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:40:32.849036   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) DBG | Closing plugin on server side
	I0130 19:40:32.849038   20626 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:40:32.849115   20626 kapi.go:59] client config for ingress-addon-legacy-223875: &rest.Config{Host:"https://192.168.39.152:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt", KeyFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.key", CAFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 19:40:32.849423   20626 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-223875" to be "Ready" ...
	I0130 19:40:32.878020   20626 node_ready.go:49] node "ingress-addon-legacy-223875" has status "Ready":"True"
	I0130 19:40:32.878045   20626 node_ready.go:38] duration metric: took 28.607274ms waiting for node "ingress-addon-legacy-223875" to be "Ready" ...
	I0130 19:40:32.878056   20626 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 19:40:32.890452   20626 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-5cpks" in "kube-system" namespace to be "Ready" ...
	I0130 19:40:32.895107   20626 main.go:141] libmachine: Making call to close driver server
	I0130 19:40:32.895126   20626 main.go:141] libmachine: (ingress-addon-legacy-223875) Calling .Close
	I0130 19:40:32.895390   20626 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:40:32.895413   20626 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:40:32.897317   20626 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0130 19:40:32.899191   20626 addons.go:505] enable addons completed in 1.19389156s: enabled=[storage-provisioner default-storageclass]
	I0130 19:40:34.896370   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:40:37.397626   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:40:39.397681   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:40:41.397795   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:40:43.398042   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:40:45.398116   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:40:47.897300   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:40:49.898210   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:40:52.396613   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:40:54.397729   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:40:56.397955   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:40:58.398224   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:41:00.898468   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:41:03.398180   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:41:05.900129   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:41:08.397786   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:41:10.398025   20626 pod_ready.go:102] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"False"
	I0130 19:41:12.397514   20626 pod_ready.go:92] pod "coredns-66bff467f8-5cpks" in "kube-system" namespace has status "Ready":"True"
	I0130 19:41:12.397534   20626 pod_ready.go:81] duration metric: took 39.507050612s waiting for pod "coredns-66bff467f8-5cpks" in "kube-system" namespace to be "Ready" ...
	I0130 19:41:12.397543   20626 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-223875" in "kube-system" namespace to be "Ready" ...
	I0130 19:41:12.402643   20626 pod_ready.go:92] pod "etcd-ingress-addon-legacy-223875" in "kube-system" namespace has status "Ready":"True"
	I0130 19:41:12.402662   20626 pod_ready.go:81] duration metric: took 5.113386ms waiting for pod "etcd-ingress-addon-legacy-223875" in "kube-system" namespace to be "Ready" ...
	I0130 19:41:12.402671   20626 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-223875" in "kube-system" namespace to be "Ready" ...
	I0130 19:41:12.407539   20626 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-223875" in "kube-system" namespace has status "Ready":"True"
	I0130 19:41:12.407557   20626 pod_ready.go:81] duration metric: took 4.880302ms waiting for pod "kube-apiserver-ingress-addon-legacy-223875" in "kube-system" namespace to be "Ready" ...
	I0130 19:41:12.407565   20626 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-223875" in "kube-system" namespace to be "Ready" ...
	I0130 19:41:12.411864   20626 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-223875" in "kube-system" namespace has status "Ready":"True"
	I0130 19:41:12.411880   20626 pod_ready.go:81] duration metric: took 4.309272ms waiting for pod "kube-controller-manager-ingress-addon-legacy-223875" in "kube-system" namespace to be "Ready" ...
	I0130 19:41:12.411890   20626 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pj44m" in "kube-system" namespace to be "Ready" ...
	I0130 19:41:12.415597   20626 pod_ready.go:92] pod "kube-proxy-pj44m" in "kube-system" namespace has status "Ready":"True"
	I0130 19:41:12.415612   20626 pod_ready.go:81] duration metric: took 3.716514ms waiting for pod "kube-proxy-pj44m" in "kube-system" namespace to be "Ready" ...
	I0130 19:41:12.415619   20626 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-223875" in "kube-system" namespace to be "Ready" ...
	I0130 19:41:12.590935   20626 request.go:629] Waited for 175.251077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-223875
	I0130 19:41:12.791208   20626 request.go:629] Waited for 197.36846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ingress-addon-legacy-223875
	I0130 19:41:12.795160   20626 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-223875" in "kube-system" namespace has status "Ready":"True"
	I0130 19:41:12.795179   20626 pod_ready.go:81] duration metric: took 379.553846ms waiting for pod "kube-scheduler-ingress-addon-legacy-223875" in "kube-system" namespace to be "Ready" ...
	I0130 19:41:12.795189   20626 pod_ready.go:38] duration metric: took 39.917119822s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 19:41:12.795206   20626 api_server.go:52] waiting for apiserver process to appear ...
	I0130 19:41:12.795258   20626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 19:41:12.806780   20626 api_server.go:72] duration metric: took 40.584156351s to wait for apiserver process to appear ...
	I0130 19:41:12.806806   20626 api_server.go:88] waiting for apiserver healthz status ...
	I0130 19:41:12.806820   20626 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0130 19:41:12.813449   20626 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I0130 19:41:12.814348   20626 api_server.go:141] control plane version: v1.18.20
	I0130 19:41:12.814368   20626 api_server.go:131] duration metric: took 7.557109ms to wait for apiserver health ...
	I0130 19:41:12.814376   20626 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 19:41:12.991777   20626 request.go:629] Waited for 177.328162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0130 19:41:12.997075   20626 system_pods.go:59] 7 kube-system pods found
	I0130 19:41:12.997105   20626 system_pods.go:61] "coredns-66bff467f8-5cpks" [1c158a24-272f-4649-8b32-8f98b1edcf80] Running
	I0130 19:41:12.997117   20626 system_pods.go:61] "etcd-ingress-addon-legacy-223875" [a44dfe8c-2ed4-4350-9e28-7737951dc21f] Running
	I0130 19:41:12.997128   20626 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-223875" [179767b1-d292-4737-a6f8-d1efe54a0445] Running
	I0130 19:41:12.997135   20626 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-223875" [1ecc1c9e-2092-40d4-9fd3-0b30f7921da7] Running
	I0130 19:41:12.997144   20626 system_pods.go:61] "kube-proxy-pj44m" [39409723-7f8f-4679-9a02-70afd28dcdfe] Running
	I0130 19:41:12.997154   20626 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-223875" [d9f1e4bc-3388-43da-8385-e1f4ba2f24b1] Running
	I0130 19:41:12.997162   20626 system_pods.go:61] "storage-provisioner" [5f9f1fc0-44ac-41b2-b7b0-f31339532533] Running
	I0130 19:41:12.997168   20626 system_pods.go:74] duration metric: took 182.786484ms to wait for pod list to return data ...
	I0130 19:41:12.997177   20626 default_sa.go:34] waiting for default service account to be created ...
	I0130 19:41:13.191612   20626 request.go:629] Waited for 194.37828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I0130 19:41:13.194558   20626 default_sa.go:45] found service account: "default"
	I0130 19:41:13.194577   20626 default_sa.go:55] duration metric: took 197.39329ms for default service account to be created ...
	I0130 19:41:13.194585   20626 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 19:41:13.391843   20626 request.go:629] Waited for 197.199572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0130 19:41:13.397785   20626 system_pods.go:86] 7 kube-system pods found
	I0130 19:41:13.397810   20626 system_pods.go:89] "coredns-66bff467f8-5cpks" [1c158a24-272f-4649-8b32-8f98b1edcf80] Running
	I0130 19:41:13.397816   20626 system_pods.go:89] "etcd-ingress-addon-legacy-223875" [a44dfe8c-2ed4-4350-9e28-7737951dc21f] Running
	I0130 19:41:13.397820   20626 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-223875" [179767b1-d292-4737-a6f8-d1efe54a0445] Running
	I0130 19:41:13.397825   20626 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-223875" [1ecc1c9e-2092-40d4-9fd3-0b30f7921da7] Running
	I0130 19:41:13.397831   20626 system_pods.go:89] "kube-proxy-pj44m" [39409723-7f8f-4679-9a02-70afd28dcdfe] Running
	I0130 19:41:13.397835   20626 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-223875" [d9f1e4bc-3388-43da-8385-e1f4ba2f24b1] Running
	I0130 19:41:13.397842   20626 system_pods.go:89] "storage-provisioner" [5f9f1fc0-44ac-41b2-b7b0-f31339532533] Running
	I0130 19:41:13.397848   20626 system_pods.go:126] duration metric: took 203.259064ms to wait for k8s-apps to be running ...
	I0130 19:41:13.397861   20626 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 19:41:13.397910   20626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 19:41:13.411188   20626 system_svc.go:56] duration metric: took 13.321795ms WaitForService to wait for kubelet.
	I0130 19:41:13.411209   20626 kubeadm.go:581] duration metric: took 41.188590213s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 19:41:13.411225   20626 node_conditions.go:102] verifying NodePressure condition ...
	I0130 19:41:13.591652   20626 request.go:629] Waited for 180.332696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I0130 19:41:13.599200   20626 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 19:41:13.599232   20626 node_conditions.go:123] node cpu capacity is 2
	I0130 19:41:13.599243   20626 node_conditions.go:105] duration metric: took 188.013049ms to run NodePressure ...
	I0130 19:41:13.599256   20626 start.go:228] waiting for startup goroutines ...
	I0130 19:41:13.599282   20626 start.go:233] waiting for cluster config update ...
	I0130 19:41:13.599296   20626 start.go:242] writing updated cluster config ...
	I0130 19:41:13.599594   20626 ssh_runner.go:195] Run: rm -f paused
	I0130 19:41:13.644622   20626 start.go:600] kubectl: 1.29.1, cluster: 1.18.20 (minor skew: 11)
	I0130 19:41:13.646488   20626 out.go:177] 
	W0130 19:41:13.647846   20626 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0130 19:41:13.649448   20626 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0130 19:41:13.650921   20626 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-223875" cluster and "default" namespace by default
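	Note on the version-skew warning above: the host /usr/local/bin/kubectl is v1.29.1 while the cluster runs v1.18.20 (minor skew 11), so the log suggests driving the kubectl bundled with minikube instead. A minimal sketch, assuming the minikube binary used by this run is on PATH; the profile name is taken from the log above, the exact commands are illustrative:
	  minikube -p ingress-addon-legacy-223875 kubectl -- version --short   # client matches the v1.18.20 control plane
	  minikube -p ingress-addon-legacy-223875 kubectl -- get pods -A       # the invocation suggested in the log output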
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 19:39:41 UTC, ends at Tue 2024-01-30 19:44:25 UTC. --
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.074962761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706643865074945820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203420,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=30d58859-51f7-46eb-abaf-e9876a1570e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.075692623Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f08dd636-471a-44de-821e-d4c61d0ecb68 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.075766148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f08dd636-471a-44de-821e-d4c61d0ecb68 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.076053262Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd945e79755a4dc4da089fd04530777bbaa7c754d8e79d6cce4a2d7b6b95b6a,PodSandboxId:19d4fb592a8b5989d35ccb9d483fc597787f492ac46156e694c043241c7ea273,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706643856305171739,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-qlpw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b14b86a7-be24-49c6-8826-5a2188deb0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 48433404,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43f78f51cd3bfe64caa96389acb0fb87e0ef0415e47aa0a839d0dd11ca5d9d5,PodSandboxId:33d57d120cfbf5bbf3c58330ce98a45e51702e9261a3d0db4f44238bf2eceb62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706643713082283279,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ff00c7-92d9-4039-a43b-c50db0872e7f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: a30d5b00,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf1d57d3ef30fc0f1e69c37f9b2f046b1f3ded159f69481583013038221e41b,PodSandboxId:e3ed12d0c1028f5e9023e3792b8a9d63e0ee30caba0186b12494a8eefd172004,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1706643689813614915,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-gv7l6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 527c6034-66bd-4f6c-982a-f2ad5c1ce3bf,},Annotations:map[string]string{io.kubernetes.container.hash: dade7560,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d7513aa73dcf029e455727667b7794eea62fe3620ea0bd6af3cc813f46dd4fce,PodSandboxId:dc955eeffde29a57f231159d3f921e99dd32611324153a46bf896c137a2c432e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706643680425007671,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z4w9z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1c28fcff-73e5-48a1-9b64-08f68cb2a8e5,},Annotations:map[string]string{io.kubernetes.container.hash: 4947a66b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f068e3b4fb0cf3192cc6f2a6174f55c84be4f222c10b66f3ef831c0ba55dc95,PodSandboxId:551a8f4a1204c25bf26d02d8eccead2796dc648114f8957790f77c1f8b0098d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706643679269771090,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qf7bs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d7653b9-f62d-4bad-95b9-4d8361173870,},Annotations:map[string]string{io.kubernetes.container.hash: ffe904cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c1e4b16c95a7016aef014c7f841fad2dc0ebac475784729ca27c60cb72c332,PodSandboxId:87c98cfdaebd15bc824bbbd458fba289bc94fa27189f1a88d17d467f6672a483,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706643633626477913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9f1fc0-44ac-41b2-b7b0-f31339532533,},Annotations:map[string]string{io.kubernetes.container.hash: 782e3c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a755b90b0f72a7cd51cc8f902564b7f77d2d8a95bb8f5e7458c580473318874,PodSandboxId:effc5b615ab3eb7e024c1d3bc1b165286ee6c1d9a1ca8192fcda4f97a69f0c3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1706643633151644489,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj44m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39409723-7f8f-4679-9a02-70afd28dcdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 1efa0c75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1502b6344bb6b2ce5fd63f005aae764205429672423dfae317ded6058f039fa2,PodSandboxId:94aec84cdcd3ad3b3a846ba71e9c2828a7c6ebe8f02c9a94d4c25c7bd5f13a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1706643632266816635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5cpks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c158a24-272f-4649-8b32-8f98b1edcf80,},Annotations:map[string]string{io.kubernetes.container.hash: 3e76391d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379f8d64fe0622d0dfedf05de05ee7c1dde9b37bac343d1276d88987498f1a4d,Pod
SandboxId:87a9e067d5ae0e68cffc63b0455615f0d9cfb549f1d346979b0b7c3418e49d18,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1706643609722017455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfd85bc463422d39b8d2423429573e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9b5af0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161fe7735ee397f4455df712afc46d0bcedc6e1ee8b719cdc637bb1b3d12750a,PodSandboxId:38bc1185c4660f20d53a8d05a0f706d0aa9f
4766c71c8938ce45e537d36ec7a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1706643608344948562,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d960bbd5cd96f96d444a3b478b95a45f729c4edf8982e2991ab6c556feded2,PodSandboxId:dfba443c0e9dfad4584b350ca0a184fd0bf87843f5
7b8f5d06ff72a00470737b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1706643608130488092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ca381e24fec1cee6c68bfdee920ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 99ee2425,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41b0194cf623aa78083e17e6d62527767e9e51723f3923471fe1f9712a9974f9,PodSandboxId:4c890f47e34a8011146860b22f5d4d226256ff73b7e1381c
5cb5fe78daf69578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1706643607963184641,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f08dd636-471a-44de-821e-d4c61d0ecb68 name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.113211979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=060d2bdd-7a16-47fb-977f-ff1f1bf3b106 name=/runtime.v1.RuntimeService/Version
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.113319889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=060d2bdd-7a16-47fb-977f-ff1f1bf3b106 name=/runtime.v1.RuntimeService/Version
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.115167435Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=11dd684e-3090-4ffe-8538-1ba2cde74b49 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.115727354Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706643865115714439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203420,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=11dd684e-3090-4ffe-8538-1ba2cde74b49 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.116112667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=40b50048-7e84-4a92-8acb-07c6bb5891db name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.116157409Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=40b50048-7e84-4a92-8acb-07c6bb5891db name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.116463751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd945e79755a4dc4da089fd04530777bbaa7c754d8e79d6cce4a2d7b6b95b6a,PodSandboxId:19d4fb592a8b5989d35ccb9d483fc597787f492ac46156e694c043241c7ea273,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706643856305171739,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-qlpw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b14b86a7-be24-49c6-8826-5a2188deb0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 48433404,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43f78f51cd3bfe64caa96389acb0fb87e0ef0415e47aa0a839d0dd11ca5d9d5,PodSandboxId:33d57d120cfbf5bbf3c58330ce98a45e51702e9261a3d0db4f44238bf2eceb62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706643713082283279,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ff00c7-92d9-4039-a43b-c50db0872e7f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: a30d5b00,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf1d57d3ef30fc0f1e69c37f9b2f046b1f3ded159f69481583013038221e41b,PodSandboxId:e3ed12d0c1028f5e9023e3792b8a9d63e0ee30caba0186b12494a8eefd172004,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1706643689813614915,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-gv7l6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 527c6034-66bd-4f6c-982a-f2ad5c1ce3bf,},Annotations:map[string]string{io.kubernetes.container.hash: dade7560,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d7513aa73dcf029e455727667b7794eea62fe3620ea0bd6af3cc813f46dd4fce,PodSandboxId:dc955eeffde29a57f231159d3f921e99dd32611324153a46bf896c137a2c432e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706643680425007671,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z4w9z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1c28fcff-73e5-48a1-9b64-08f68cb2a8e5,},Annotations:map[string]string{io.kubernetes.container.hash: 4947a66b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f068e3b4fb0cf3192cc6f2a6174f55c84be4f222c10b66f3ef831c0ba55dc95,PodSandboxId:551a8f4a1204c25bf26d02d8eccead2796dc648114f8957790f77c1f8b0098d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706643679269771090,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qf7bs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d7653b9-f62d-4bad-95b9-4d8361173870,},Annotations:map[string]string{io.kubernetes.container.hash: ffe904cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c1e4b16c95a7016aef014c7f841fad2dc0ebac475784729ca27c60cb72c332,PodSandboxId:87c98cfdaebd15bc824bbbd458fba289bc94fa27189f1a88d17d467f6672a483,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706643633626477913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9f1fc0-44ac-41b2-b7b0-f31339532533,},Annotations:map[string]string{io.kubernetes.container.hash: 782e3c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a755b90b0f72a7cd51cc8f902564b7f77d2d8a95bb8f5e7458c580473318874,PodSandboxId:effc5b615ab3eb7e024c1d3bc1b165286ee6c1d9a1ca8192fcda4f97a69f0c3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1706643633151644489,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj44m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39409723-7f8f-4679-9a02-70afd28dcdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 1efa0c75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1502b6344bb6b2ce5fd63f005aae764205429672423dfae317ded6058f039fa2,PodSandboxId:94aec84cdcd3ad3b3a846ba71e9c2828a7c6ebe8f02c9a94d4c25c7bd5f13a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1706643632266816635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5cpks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c158a24-272f-4649-8b32-8f98b1edcf80,},Annotations:map[string]string{io.kubernetes.container.hash: 3e76391d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379f8d64fe0622d0dfedf05de05ee7c1dde9b37bac343d1276d88987498f1a4d,Pod
SandboxId:87a9e067d5ae0e68cffc63b0455615f0d9cfb549f1d346979b0b7c3418e49d18,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1706643609722017455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfd85bc463422d39b8d2423429573e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9b5af0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161fe7735ee397f4455df712afc46d0bcedc6e1ee8b719cdc637bb1b3d12750a,PodSandboxId:38bc1185c4660f20d53a8d05a0f706d0aa9f
4766c71c8938ce45e537d36ec7a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1706643608344948562,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d960bbd5cd96f96d444a3b478b95a45f729c4edf8982e2991ab6c556feded2,PodSandboxId:dfba443c0e9dfad4584b350ca0a184fd0bf87843f5
7b8f5d06ff72a00470737b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1706643608130488092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ca381e24fec1cee6c68bfdee920ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 99ee2425,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41b0194cf623aa78083e17e6d62527767e9e51723f3923471fe1f9712a9974f9,PodSandboxId:4c890f47e34a8011146860b22f5d4d226256ff73b7e1381c
5cb5fe78daf69578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1706643607963184641,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=40b50048-7e84-4a92-8acb-07c6bb5891db name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.155258270Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=80d2a192-7cca-451d-b6b7-deacf0fa53a9 name=/runtime.v1.RuntimeService/Version
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.155309494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=80d2a192-7cca-451d-b6b7-deacf0fa53a9 name=/runtime.v1.RuntimeService/Version
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.157681451Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=402b397b-1000-4e10-85af-50ff18fdfd69 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.158124098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706643865158110328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203420,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=402b397b-1000-4e10-85af-50ff18fdfd69 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.158854703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7c57fb4d-86ec-42e9-b5c7-879482ab3e3a name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.158907472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7c57fb4d-86ec-42e9-b5c7-879482ab3e3a name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.159131756Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd945e79755a4dc4da089fd04530777bbaa7c754d8e79d6cce4a2d7b6b95b6a,PodSandboxId:19d4fb592a8b5989d35ccb9d483fc597787f492ac46156e694c043241c7ea273,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706643856305171739,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-qlpw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b14b86a7-be24-49c6-8826-5a2188deb0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 48433404,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43f78f51cd3bfe64caa96389acb0fb87e0ef0415e47aa0a839d0dd11ca5d9d5,PodSandboxId:33d57d120cfbf5bbf3c58330ce98a45e51702e9261a3d0db4f44238bf2eceb62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706643713082283279,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ff00c7-92d9-4039-a43b-c50db0872e7f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: a30d5b00,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf1d57d3ef30fc0f1e69c37f9b2f046b1f3ded159f69481583013038221e41b,PodSandboxId:e3ed12d0c1028f5e9023e3792b8a9d63e0ee30caba0186b12494a8eefd172004,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1706643689813614915,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-gv7l6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 527c6034-66bd-4f6c-982a-f2ad5c1ce3bf,},Annotations:map[string]string{io.kubernetes.container.hash: dade7560,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d7513aa73dcf029e455727667b7794eea62fe3620ea0bd6af3cc813f46dd4fce,PodSandboxId:dc955eeffde29a57f231159d3f921e99dd32611324153a46bf896c137a2c432e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706643680425007671,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z4w9z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1c28fcff-73e5-48a1-9b64-08f68cb2a8e5,},Annotations:map[string]string{io.kubernetes.container.hash: 4947a66b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f068e3b4fb0cf3192cc6f2a6174f55c84be4f222c10b66f3ef831c0ba55dc95,PodSandboxId:551a8f4a1204c25bf26d02d8eccead2796dc648114f8957790f77c1f8b0098d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706643679269771090,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qf7bs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d7653b9-f62d-4bad-95b9-4d8361173870,},Annotations:map[string]string{io.kubernetes.container.hash: ffe904cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c1e4b16c95a7016aef014c7f841fad2dc0ebac475784729ca27c60cb72c332,PodSandboxId:87c98cfdaebd15bc824bbbd458fba289bc94fa27189f1a88d17d467f6672a483,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706643633626477913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9f1fc0-44ac-41b2-b7b0-f31339532533,},Annotations:map[string]string{io.kubernetes.container.hash: 782e3c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a755b90b0f72a7cd51cc8f902564b7f77d2d8a95bb8f5e7458c580473318874,PodSandboxId:effc5b615ab3eb7e024c1d3bc1b165286ee6c1d9a1ca8192fcda4f97a69f0c3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1706643633151644489,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj44m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39409723-7f8f-4679-9a02-70afd28dcdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 1efa0c75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1502b6344bb6b2ce5fd63f005aae764205429672423dfae317ded6058f039fa2,PodSandboxId:94aec84cdcd3ad3b3a846ba71e9c2828a7c6ebe8f02c9a94d4c25c7bd5f13a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1706643632266816635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5cpks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c158a24-272f-4649-8b32-8f98b1edcf80,},Annotations:map[string]string{io.kubernetes.container.hash: 3e76391d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379f8d64fe0622d0dfedf05de05ee7c1dde9b37bac343d1276d88987498f1a4d,Pod
SandboxId:87a9e067d5ae0e68cffc63b0455615f0d9cfb549f1d346979b0b7c3418e49d18,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1706643609722017455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfd85bc463422d39b8d2423429573e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9b5af0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161fe7735ee397f4455df712afc46d0bcedc6e1ee8b719cdc637bb1b3d12750a,PodSandboxId:38bc1185c4660f20d53a8d05a0f706d0aa9f
4766c71c8938ce45e537d36ec7a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1706643608344948562,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d960bbd5cd96f96d444a3b478b95a45f729c4edf8982e2991ab6c556feded2,PodSandboxId:dfba443c0e9dfad4584b350ca0a184fd0bf87843f5
7b8f5d06ff72a00470737b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1706643608130488092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ca381e24fec1cee6c68bfdee920ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 99ee2425,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41b0194cf623aa78083e17e6d62527767e9e51723f3923471fe1f9712a9974f9,PodSandboxId:4c890f47e34a8011146860b22f5d4d226256ff73b7e1381c
5cb5fe78daf69578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1706643607963184641,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7c57fb4d-86ec-42e9-b5c7-879482ab3e3a name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.192193103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1ff2ce41-3911-47e3-b120-1636a7003909 name=/runtime.v1.RuntimeService/Version
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.192240080Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1ff2ce41-3911-47e3-b120-1636a7003909 name=/runtime.v1.RuntimeService/Version
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.193545899Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cf701773-45a8-41e4-a616-a7d69ffa729a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.194046314Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706643865194032496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203420,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=cf701773-45a8-41e4-a616-a7d69ffa729a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.194522100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d8a86b93-0112-400a-a566-46ef508c9305 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.194563920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d8a86b93-0112-400a-a566-46ef508c9305 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 19:44:25 ingress-addon-legacy-223875 crio[712]: time="2024-01-30 19:44:25.194829127Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd945e79755a4dc4da089fd04530777bbaa7c754d8e79d6cce4a2d7b6b95b6a,PodSandboxId:19d4fb592a8b5989d35ccb9d483fc597787f492ac46156e694c043241c7ea273,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706643856305171739,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-qlpw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b14b86a7-be24-49c6-8826-5a2188deb0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 48433404,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43f78f51cd3bfe64caa96389acb0fb87e0ef0415e47aa0a839d0dd11ca5d9d5,PodSandboxId:33d57d120cfbf5bbf3c58330ce98a45e51702e9261a3d0db4f44238bf2eceb62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706643713082283279,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ff00c7-92d9-4039-a43b-c50db0872e7f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: a30d5b00,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf1d57d3ef30fc0f1e69c37f9b2f046b1f3ded159f69481583013038221e41b,PodSandboxId:e3ed12d0c1028f5e9023e3792b8a9d63e0ee30caba0186b12494a8eefd172004,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1706643689813614915,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-gv7l6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 527c6034-66bd-4f6c-982a-f2ad5c1ce3bf,},Annotations:map[string]string{io.kubernetes.container.hash: dade7560,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d7513aa73dcf029e455727667b7794eea62fe3620ea0bd6af3cc813f46dd4fce,PodSandboxId:dc955eeffde29a57f231159d3f921e99dd32611324153a46bf896c137a2c432e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706643680425007671,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z4w9z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1c28fcff-73e5-48a1-9b64-08f68cb2a8e5,},Annotations:map[string]string{io.kubernetes.container.hash: 4947a66b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f068e3b4fb0cf3192cc6f2a6174f55c84be4f222c10b66f3ef831c0ba55dc95,PodSandboxId:551a8f4a1204c25bf26d02d8eccead2796dc648114f8957790f77c1f8b0098d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706643679269771090,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qf7bs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d7653b9-f62d-4bad-95b9-4d8361173870,},Annotations:map[string]string{io.kubernetes.container.hash: ffe904cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c1e4b16c95a7016aef014c7f841fad2dc0ebac475784729ca27c60cb72c332,PodSandboxId:87c98cfdaebd15bc824bbbd458fba289bc94fa27189f1a88d17d467f6672a483,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706643633626477913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9f1fc0-44ac-41b2-b7b0-f31339532533,},Annotations:map[string]string{io.kubernetes.container.hash: 782e3c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a755b90b0f72a7cd51cc8f902564b7f77d2d8a95bb8f5e7458c580473318874,PodSandboxId:effc5b615ab3eb7e024c1d3bc1b165286ee6c1d9a1ca8192fcda4f97a69f0c3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1706643633151644489,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj44m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39409723-7f8f-4679-9a02-70afd28dcdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 1efa0c75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1502b6344bb6b2ce5fd63f005aae764205429672423dfae317ded6058f039fa2,PodSandboxId:94aec84cdcd3ad3b3a846ba71e9c2828a7c6ebe8f02c9a94d4c25c7bd5f13a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1706643632266816635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5cpks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c158a24-272f-4649-8b32-8f98b1edcf80,},Annotations:map[string]string{io.kubernetes.container.hash: 3e76391d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379f8d64fe0622d0dfedf05de05ee7c1dde9b37bac343d1276d88987498f1a4d,Pod
SandboxId:87a9e067d5ae0e68cffc63b0455615f0d9cfb549f1d346979b0b7c3418e49d18,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1706643609722017455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfd85bc463422d39b8d2423429573e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9b5af0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161fe7735ee397f4455df712afc46d0bcedc6e1ee8b719cdc637bb1b3d12750a,PodSandboxId:38bc1185c4660f20d53a8d05a0f706d0aa9f
4766c71c8938ce45e537d36ec7a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1706643608344948562,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d960bbd5cd96f96d444a3b478b95a45f729c4edf8982e2991ab6c556feded2,PodSandboxId:dfba443c0e9dfad4584b350ca0a184fd0bf87843f5
7b8f5d06ff72a00470737b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1706643608130488092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ca381e24fec1cee6c68bfdee920ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 99ee2425,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41b0194cf623aa78083e17e6d62527767e9e51723f3923471fe1f9712a9974f9,PodSandboxId:4c890f47e34a8011146860b22f5d4d226256ff73b7e1381c
5cb5fe78daf69578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1706643607963184641,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-223875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d8a86b93-0112-400a-a566-46ef508c9305 name=/runtime.v1.RuntimeSer
vice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ddd945e79755a       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            8 seconds ago       Running             hello-world-app           0                   19d4fb592a8b5       hello-world-app-5f5d8b66bb-qlpw6
	b43f78f51cd3b       docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25                    2 minutes ago       Running             nginx                     0                   33d57d120cfbf       nginx
	bbf1d57d3ef30       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   e3ed12d0c1028       ingress-nginx-controller-7fcf777cb7-gv7l6
	d7513aa73dcf0       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   dc955eeffde29       ingress-nginx-admission-patch-z4w9z
	9f068e3b4fb0c       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   551a8f4a1204c       ingress-nginx-admission-create-qf7bs
	56c1e4b16c95a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   87c98cfdaebd1       storage-provisioner
	4a755b90b0f72       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   effc5b615ab3e       kube-proxy-pj44m
	1502b6344bb6b       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   94aec84cdcd3a       coredns-66bff467f8-5cpks
	379f8d64fe062       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   87a9e067d5ae0       etcd-ingress-addon-legacy-223875
	161fe7735ee39       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   38bc1185c4660       kube-scheduler-ingress-addon-legacy-223875
	04d960bbd5cd9       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   dfba443c0e9df       kube-apiserver-ingress-addon-legacy-223875
	41b0194cf623a       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   4c890f47e34a8       kube-controller-manager-ingress-addon-legacy-223875
	
	
	==> coredns [1502b6344bb6b2ce5fd63f005aae764205429672423dfae317ded6058f039fa2] <==
	[INFO] 10.244.0.5:40144 - 16088 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000031314s
	[INFO] 10.244.0.5:48906 - 18252 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070551s
	[INFO] 10.244.0.5:40144 - 50171 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031604s
	[INFO] 10.244.0.5:48906 - 57992 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000075023s
	[INFO] 10.244.0.5:40144 - 41016 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028262s
	[INFO] 10.244.0.5:48906 - 52363 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061035s
	[INFO] 10.244.0.5:40144 - 22307 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029366s
	[INFO] 10.244.0.5:48906 - 32726 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057773s
	[INFO] 10.244.0.5:48906 - 9678 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000131779s
	[INFO] 10.244.0.5:40144 - 46849 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000240543s
	[INFO] 10.244.0.5:40144 - 5346 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041831s
	[INFO] 10.244.0.5:49861 - 32089 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000087539s
	[INFO] 10.244.0.5:44699 - 49511 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000063463s
	[INFO] 10.244.0.5:49861 - 26219 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048862s
	[INFO] 10.244.0.5:49861 - 13650 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026542s
	[INFO] 10.244.0.5:49861 - 33201 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00008471s
	[INFO] 10.244.0.5:44699 - 23320 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054162s
	[INFO] 10.244.0.5:49861 - 4138 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029833s
	[INFO] 10.244.0.5:44699 - 1324 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050377s
	[INFO] 10.244.0.5:49861 - 54471 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027541s
	[INFO] 10.244.0.5:44699 - 43562 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050824s
	[INFO] 10.244.0.5:49861 - 11972 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000030842s
	[INFO] 10.244.0.5:44699 - 64541 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051217s
	[INFO] 10.244.0.5:44699 - 590 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000133951s
	[INFO] 10.244.0.5:44699 - 18247 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066269s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-223875
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-223875
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=ingress-addon-legacy-223875
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T19_40_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 19:40:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-223875
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 19:44:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 19:44:16 +0000   Tue, 30 Jan 2024 19:40:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 19:44:16 +0000   Tue, 30 Jan 2024 19:40:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 19:44:16 +0000   Tue, 30 Jan 2024 19:40:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 19:44:16 +0000   Tue, 30 Jan 2024 19:40:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ingress-addon-legacy-223875
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebe7aa24d5f148039da0453725741377
	  System UUID:                ebe7aa24-d5f1-4803-9da0-453725741377
	  Boot ID:                    c8858f1e-0ec8-4d73-afd8-24f49a6d0575
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-qlpw6                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 coredns-66bff467f8-5cpks                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m54s
	  kube-system                 etcd-ingress-addon-legacy-223875                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-apiserver-ingress-addon-legacy-223875             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-223875    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-pj44m                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-scheduler-ingress-addon-legacy-223875             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m9s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s   kubelet     Node ingress-addon-legacy-223875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s   kubelet     Node ingress-addon-legacy-223875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s   kubelet     Node ingress-addon-legacy-223875 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m9s   kubelet     Node ingress-addon-legacy-223875 status is now: NodeReady
	  Normal  Starting                 3m52s  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan30 19:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.091827] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.420379] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.455445] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147676] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.948070] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.589586] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.116305] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.146077] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.115530] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.227121] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[Jan30 19:40] systemd-fstab-generator[1026]: Ignoring "noauto" for root device
	[  +2.896689] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.636834] systemd-fstab-generator[1419]: Ignoring "noauto" for root device
	[ +15.784344] kauditd_printk_skb: 6 callbacks suppressed
	[Jan30 19:41] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.935011] kauditd_printk_skb: 6 callbacks suppressed
	[ +27.299748] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.994911] kauditd_printk_skb: 3 callbacks suppressed
	[Jan30 19:44] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [379f8d64fe0622d0dfedf05de05ee7c1dde9b37bac343d1276d88987498f1a4d] <==
	raft2024/01/30 19:40:09 INFO: 900c4b71f7b778f3 became follower at term 1
	raft2024/01/30 19:40:09 INFO: 900c4b71f7b778f3 switched to configuration voters=(10379754194041534707)
	2024-01-30 19:40:09.841710 W | auth: simple token is not cryptographically signed
	2024-01-30 19:40:09.845526 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2024/01/30 19:40:09 INFO: 900c4b71f7b778f3 switched to configuration voters=(10379754194041534707)
	2024-01-30 19:40:09.847532 I | etcdserver: 900c4b71f7b778f3 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-30 19:40:09.847698 I | etcdserver/membership: added member 900c4b71f7b778f3 [https://192.168.39.152:2380] to cluster ce072c4559d5992c
	2024-01-30 19:40:09.848563 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-30 19:40:09.848651 I | embed: listening for peers on 192.168.39.152:2380
	2024-01-30 19:40:09.848713 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/30 19:40:10 INFO: 900c4b71f7b778f3 is starting a new election at term 1
	raft2024/01/30 19:40:10 INFO: 900c4b71f7b778f3 became candidate at term 2
	raft2024/01/30 19:40:10 INFO: 900c4b71f7b778f3 received MsgVoteResp from 900c4b71f7b778f3 at term 2
	raft2024/01/30 19:40:10 INFO: 900c4b71f7b778f3 became leader at term 2
	raft2024/01/30 19:40:10 INFO: raft.node: 900c4b71f7b778f3 elected leader 900c4b71f7b778f3 at term 2
	2024-01-30 19:40:10.335586 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-30 19:40:10.337353 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-30 19:40:10.337746 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-30 19:40:10.337802 I | etcdserver: published {Name:ingress-addon-legacy-223875 ClientURLs:[https://192.168.39.152:2379]} to cluster ce072c4559d5992c
	2024-01-30 19:40:10.337811 I | embed: ready to serve client requests
	2024-01-30 19:40:10.338044 I | embed: ready to serve client requests
	2024-01-30 19:40:10.339046 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-30 19:40:10.341051 I | embed: serving client requests on 192.168.39.152:2379
	2024-01-30 19:40:31.076609 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (386.73087ms) to execute
	2024-01-30 19:40:31.076722 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" " with result "range_response_count:0 size:5" took too long (505.320741ms) to execute
	
	
	==> kernel <==
	 19:44:25 up 4 min,  0 users,  load average: 0.84, 0.49, 0.22
	Linux ingress-addon-legacy-223875 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [04d960bbd5cd96f96d444a3b478b95a45f729c4edf8982e2991ab6c556feded2] <==
	I0130 19:40:13.301505       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0130 19:40:13.301574       1 cache.go:39] Caches are synced for autoregister controller
	I0130 19:40:13.301753       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0130 19:40:13.310124       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0130 19:40:13.315993       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0130 19:40:14.194969       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0130 19:40:14.195132       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0130 19:40:14.202647       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0130 19:40:14.208640       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0130 19:40:14.208683       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0130 19:40:14.674488       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0130 19:40:14.717525       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0130 19:40:14.857317       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.152]
	I0130 19:40:14.858164       1 controller.go:609] quota admission added evaluator for: endpoints
	I0130 19:40:14.861860       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0130 19:40:15.566758       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0130 19:40:16.259354       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0130 19:40:16.383303       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0130 19:40:16.705222       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0130 19:40:31.077127       1 trace.go:116] Trace[46720931]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/bootstrap-signer,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/kube-controller-manager,client:192.168.39.152 (started: 2024-01-30 19:40:30.570835672 +0000 UTC m=+22.195209581) (total time: 506.259966ms):
	Trace[46720931]: [506.259966ms] [506.251999ms] END
	I0130 19:40:31.375077       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0130 19:40:31.847761       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0130 19:41:14.429605       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0130 19:41:47.473034       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [41b0194cf623aa78083e17e6d62527767e9e51723f3923471fe1f9712a9974f9] <==
	I0130 19:40:31.488934       1 range_allocator.go:373] Set node ingress-addon-legacy-223875 PodCIDR to [10.244.0.0/24]
	I0130 19:40:31.572162       1 shared_informer.go:230] Caches are synced for disruption 
	I0130 19:40:31.572204       1 disruption.go:339] Sending events to api server.
	I0130 19:40:31.672931       1 shared_informer.go:230] Caches are synced for HPA 
	I0130 19:40:31.742143       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"e1cb76d9-3d86-4c7d-bd88-73cfa625c67b", APIVersion:"apps/v1", ResourceVersion:"343", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0130 19:40:31.747497       1 shared_informer.go:230] Caches are synced for stateful set 
	I0130 19:40:31.774960       1 shared_informer.go:230] Caches are synced for resource quota 
	I0130 19:40:31.797755       1 shared_informer.go:230] Caches are synced for resource quota 
	I0130 19:40:31.797982       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0130 19:40:31.798034       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0130 19:40:31.822757       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0130 19:40:31.850565       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6ac68893-98d0-4836-ad4e-ddda373ad7e0", APIVersion:"apps/v1", ResourceVersion:"344", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-2pt4c
	I0130 19:40:31.873652       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0130 19:40:31.914370       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"305d2449-5c60-4c82-a87e-1ea54ed2f58a", APIVersion:"apps/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-pj44m
	E0130 19:40:32.173871       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"305d2449-5c60-4c82-a87e-1ea54ed2f58a", ResourceVersion:"209", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63842240416, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001bba560), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc001bba580)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001bba5a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001b96d40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc001bba5c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bba5e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001bba620)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001af3f40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001bb6358), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00010ccb0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0008de6d0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001bb63a8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0130 19:41:14.417924       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"02381d4e-74e1-46f9-8dd7-335ba18afb8c", APIVersion:"apps/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0130 19:41:14.450783       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"bc1e6bae-7383-47cd-a16f-70712cde25ad", APIVersion:"apps/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-gv7l6
	I0130 19:41:14.489593       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"fecee4a7-180f-4f2e-883b-26b60038589b", APIVersion:"batch/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-qf7bs
	I0130 19:41:14.497906       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"c156de43-189e-43ab-9306-8a5feb92ae56", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-z4w9z
	I0130 19:41:19.964309       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"fecee4a7-180f-4f2e-883b-26b60038589b", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0130 19:41:20.970530       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"c156de43-189e-43ab-9306-8a5feb92ae56", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0130 19:44:12.343113       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"768756c6-3140-40fa-a9e1-5496e3258e2f", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0130 19:44:12.360808       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"afcd343e-d965-4aa8-99b0-892de242e089", APIVersion:"apps/v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-qlpw6
	E0130 19:44:22.469064       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-48p7z" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [4a755b90b0f72a7cd51cc8f902564b7f77d2d8a95bb8f5e7458c580473318874] <==
	W0130 19:40:33.388265       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0130 19:40:33.400585       1 node.go:136] Successfully retrieved node IP: 192.168.39.152
	I0130 19:40:33.400662       1 server_others.go:186] Using iptables Proxier.
	I0130 19:40:33.401941       1 server.go:583] Version: v1.18.20
	I0130 19:40:33.403570       1 config.go:315] Starting service config controller
	I0130 19:40:33.403613       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0130 19:40:33.403672       1 config.go:133] Starting endpoints config controller
	I0130 19:40:33.403719       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0130 19:40:33.504557       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0130 19:40:33.504684       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [161fe7735ee397f4455df712afc46d0bcedc6e1ee8b719cdc637bb1b3d12750a] <==
	I0130 19:40:13.312574       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0130 19:40:13.313155       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0130 19:40:13.313215       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0130 19:40:13.313231       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0130 19:40:13.318232       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 19:40:13.318357       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 19:40:13.325988       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 19:40:13.337709       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 19:40:13.337963       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 19:40:13.340746       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 19:40:13.340995       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 19:40:13.341217       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 19:40:13.343793       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 19:40:13.343893       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 19:40:13.343975       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 19:40:13.349102       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 19:40:14.178819       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 19:40:14.245593       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 19:40:14.331247       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 19:40:14.361555       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 19:40:14.394479       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 19:40:14.416919       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0130 19:40:16.113558       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0130 19:40:31.419010       1 factory.go:503] pod: kube-system/coredns-66bff467f8-2pt4c is already present in the active queue
	E0130 19:40:31.434978       1 factory.go:503] pod: kube-system/coredns-66bff467f8-5cpks is already present in the active queue
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 19:39:41 UTC, ends at Tue 2024-01-30 19:44:25 UTC. --
	Jan 30 19:41:22 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:41:22.089488    1426 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c28fcff-73e5-48a1-9b64-08f68cb2a8e5-ingress-nginx-admission-token-xf6hl" (OuterVolumeSpecName: "ingress-nginx-admission-token-xf6hl") pod "1c28fcff-73e5-48a1-9b64-08f68cb2a8e5" (UID: "1c28fcff-73e5-48a1-9b64-08f68cb2a8e5"). InnerVolumeSpecName "ingress-nginx-admission-token-xf6hl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 30 19:41:22 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:41:22.185453    1426 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-xf6hl" (UniqueName: "kubernetes.io/secret/1c28fcff-73e5-48a1-9b64-08f68cb2a8e5-ingress-nginx-admission-token-xf6hl") on node "ingress-addon-legacy-223875" DevicePath ""
	Jan 30 19:41:31 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:41:31.165610    1426 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 30 19:41:31 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:41:31.315272    1426 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-zwrhm" (UniqueName: "kubernetes.io/secret/85eb8dda-9419-4844-a8e3-587ac3efba50-minikube-ingress-dns-token-zwrhm") pod "kube-ingress-dns-minikube" (UID: "85eb8dda-9419-4844-a8e3-587ac3efba50")
	Jan 30 19:41:47 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:41:47.680228    1426 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 30 19:41:47 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:41:47.866935    1426 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-svww2" (UniqueName: "kubernetes.io/secret/29ff00c7-92d9-4039-a43b-c50db0872e7f-default-token-svww2") pod "nginx" (UID: "29ff00c7-92d9-4039-a43b-c50db0872e7f")
	Jan 30 19:44:12 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:12.367611    1426 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 30 19:44:12 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:12.416335    1426 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-svww2" (UniqueName: "kubernetes.io/secret/b14b86a7-be24-49c6-8826-5a2188deb0ad-default-token-svww2") pod "hello-world-app-5f5d8b66bb-qlpw6" (UID: "b14b86a7-be24-49c6-8826-5a2188deb0ad")
	Jan 30 19:44:13 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:13.671248    1426 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5d161203f777fd2ba1705adb7c60febcbb676bd8b7ac01386607694db2b3f885
	Jan 30 19:44:13 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:13.702095    1426 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5d161203f777fd2ba1705adb7c60febcbb676bd8b7ac01386607694db2b3f885
	Jan 30 19:44:13 ingress-addon-legacy-223875 kubelet[1426]: E0130 19:44:13.703989    1426 remote_runtime.go:295] ContainerStatus "5d161203f777fd2ba1705adb7c60febcbb676bd8b7ac01386607694db2b3f885" from runtime service failed: rpc error: code = NotFound desc = could not find container "5d161203f777fd2ba1705adb7c60febcbb676bd8b7ac01386607694db2b3f885": container with ID starting with 5d161203f777fd2ba1705adb7c60febcbb676bd8b7ac01386607694db2b3f885 not found: ID does not exist
	Jan 30 19:44:13 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:13.722789    1426 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-zwrhm" (UniqueName: "kubernetes.io/secret/85eb8dda-9419-4844-a8e3-587ac3efba50-minikube-ingress-dns-token-zwrhm") pod "85eb8dda-9419-4844-a8e3-587ac3efba50" (UID: "85eb8dda-9419-4844-a8e3-587ac3efba50")
	Jan 30 19:44:13 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:13.728217    1426 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85eb8dda-9419-4844-a8e3-587ac3efba50-minikube-ingress-dns-token-zwrhm" (OuterVolumeSpecName: "minikube-ingress-dns-token-zwrhm") pod "85eb8dda-9419-4844-a8e3-587ac3efba50" (UID: "85eb8dda-9419-4844-a8e3-587ac3efba50"). InnerVolumeSpecName "minikube-ingress-dns-token-zwrhm". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 30 19:44:13 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:13.823213    1426 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-zwrhm" (UniqueName: "kubernetes.io/secret/85eb8dda-9419-4844-a8e3-587ac3efba50-minikube-ingress-dns-token-zwrhm") on node "ingress-addon-legacy-223875" DevicePath ""
	Jan 30 19:44:14 ingress-addon-legacy-223875 kubelet[1426]: E0130 19:44:14.774702    1426 kubelet_pods.go:1235] Failed killing the pod "kube-ingress-dns-minikube": failed to "KillContainer" for "minikube-ingress-dns" with KillContainerError: "rpc error: code = NotFound desc = could not find container \"5d161203f777fd2ba1705adb7c60febcbb676bd8b7ac01386607694db2b3f885\": container with ID starting with 5d161203f777fd2ba1705adb7c60febcbb676bd8b7ac01386607694db2b3f885 not found: ID does not exist"
	Jan 30 19:44:17 ingress-addon-legacy-223875 kubelet[1426]: E0130 19:44:17.703956    1426 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-gv7l6.17af378be05bf3a6", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-gv7l6", UID:"527c6034-66bd-4f6c-982a-f2ad5c1ce3bf", APIVersion:"v1", ResourceVersion:"472", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-223875"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc166724469c589a6, ext:241496140712, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc166724469c589a6, ext:241496140712, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-gv7l6.17af378be05bf3a6" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 30 19:44:17 ingress-addon-legacy-223875 kubelet[1426]: E0130 19:44:17.722175    1426 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-gv7l6.17af378be05bf3a6", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-gv7l6", UID:"527c6034-66bd-4f6c-982a-f2ad5c1ce3bf", APIVersion:"v1", ResourceVersion:"472", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-223875"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc166724469c589a6, ext:241496140712, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16672446ac7cde0, ext:241513066466, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-gv7l6.17af378be05bf3a6" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 30 19:44:20 ingress-addon-legacy-223875 kubelet[1426]: W0130 19:44:20.705980    1426 pod_container_deletor.go:77] Container "e3ed12d0c1028f5e9023e3792b8a9d63e0ee30caba0186b12494a8eefd172004" not found in pod's containers
	Jan 30 19:44:21 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:21.855047    1426 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-lcwzg" (UniqueName: "kubernetes.io/secret/527c6034-66bd-4f6c-982a-f2ad5c1ce3bf-ingress-nginx-token-lcwzg") pod "527c6034-66bd-4f6c-982a-f2ad5c1ce3bf" (UID: "527c6034-66bd-4f6c-982a-f2ad5c1ce3bf")
	Jan 30 19:44:21 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:21.855082    1426 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/527c6034-66bd-4f6c-982a-f2ad5c1ce3bf-webhook-cert") pod "527c6034-66bd-4f6c-982a-f2ad5c1ce3bf" (UID: "527c6034-66bd-4f6c-982a-f2ad5c1ce3bf")
	Jan 30 19:44:21 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:21.857136    1426 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527c6034-66bd-4f6c-982a-f2ad5c1ce3bf-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "527c6034-66bd-4f6c-982a-f2ad5c1ce3bf" (UID: "527c6034-66bd-4f6c-982a-f2ad5c1ce3bf"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 30 19:44:21 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:21.859006    1426 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527c6034-66bd-4f6c-982a-f2ad5c1ce3bf-ingress-nginx-token-lcwzg" (OuterVolumeSpecName: "ingress-nginx-token-lcwzg") pod "527c6034-66bd-4f6c-982a-f2ad5c1ce3bf" (UID: "527c6034-66bd-4f6c-982a-f2ad5c1ce3bf"). InnerVolumeSpecName "ingress-nginx-token-lcwzg". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 30 19:44:21 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:21.955441    1426 reconciler.go:319] Volume detached for volume "ingress-nginx-token-lcwzg" (UniqueName: "kubernetes.io/secret/527c6034-66bd-4f6c-982a-f2ad5c1ce3bf-ingress-nginx-token-lcwzg") on node "ingress-addon-legacy-223875" DevicePath ""
	Jan 30 19:44:21 ingress-addon-legacy-223875 kubelet[1426]: I0130 19:44:21.955472    1426 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/527c6034-66bd-4f6c-982a-f2ad5c1ce3bf-webhook-cert") on node "ingress-addon-legacy-223875" DevicePath ""
	Jan 30 19:44:22 ingress-addon-legacy-223875 kubelet[1426]: W0130 19:44:22.751814    1426 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/527c6034-66bd-4f6c-982a-f2ad5c1ce3bf/volumes" does not exist
	
	
	==> storage-provisioner [56c1e4b16c95a7016aef014c7f841fad2dc0ebac475784729ca27c60cb72c332] <==
	I0130 19:40:33.714657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 19:40:33.723237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 19:40:33.723305       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 19:40:33.735110       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 19:40:33.735812       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e54d608-d62a-4b29-b480-61c7c89cb499", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-223875_54bc35cc-638b-4c43-8e23-dd145ef4fdf7 became leader
	I0130 19:40:33.735889       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-223875_54bc35cc-638b-4c43-8e23-dd145ef4fdf7!
	I0130 19:40:33.838135       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-223875_54bc35cc-638b-4c43-8e23-dd145ef4fdf7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-223875 -n ingress-addon-legacy-223875
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-223875 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (174.97s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (694.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-572652
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-572652
E0130 19:53:07.773387   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:53:39.710630   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-572652: exit status 82 (2m0.261841485s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-572652"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-572652" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-572652 --wait=true -v=8 --alsologtostderr
E0130 19:55:02.758503   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:56:31.182602   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:58:07.773517   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:58:39.710393   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:59:30.818384   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 20:01:31.182064   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 20:02:54.227540   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 20:03:07.773272   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 20:03:39.710648   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-572652 --wait=true -v=8 --alsologtostderr: (9m31.577379343s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-572652
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-572652 -n multinode-572652
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-572652 logs -n 25: (1.533835694s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-572652 ssh -n                                                                 | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-572652 cp multinode-572652-m02:/home/docker/cp-test.txt                       | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile652618288/001/cp-test_multinode-572652-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-572652 ssh -n                                                                 | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-572652 cp multinode-572652-m02:/home/docker/cp-test.txt                       | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652:/home/docker/cp-test_multinode-572652-m02_multinode-572652.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-572652 ssh -n                                                                 | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-572652 ssh -n multinode-572652 sudo cat                                       | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | /home/docker/cp-test_multinode-572652-m02_multinode-572652.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-572652 cp multinode-572652-m02:/home/docker/cp-test.txt                       | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652-m03:/home/docker/cp-test_multinode-572652-m02_multinode-572652-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-572652 ssh -n                                                                 | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-572652 ssh -n multinode-572652-m03 sudo cat                                   | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | /home/docker/cp-test_multinode-572652-m02_multinode-572652-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-572652 cp testdata/cp-test.txt                                                | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-572652 ssh -n                                                                 | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-572652 cp multinode-572652-m03:/home/docker/cp-test.txt                       | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile652618288/001/cp-test_multinode-572652-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-572652 ssh -n                                                                 | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-572652 cp multinode-572652-m03:/home/docker/cp-test.txt                       | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652:/home/docker/cp-test_multinode-572652-m03_multinode-572652.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-572652 ssh -n                                                                 | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-572652 ssh -n multinode-572652 sudo cat                                       | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | /home/docker/cp-test_multinode-572652-m03_multinode-572652.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-572652 cp multinode-572652-m03:/home/docker/cp-test.txt                       | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652-m02:/home/docker/cp-test_multinode-572652-m03_multinode-572652-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-572652 ssh -n                                                                 | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | multinode-572652-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-572652 ssh -n multinode-572652-m02 sudo cat                                   | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | /home/docker/cp-test_multinode-572652-m03_multinode-572652-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-572652 node stop m03                                                          | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	| node    | multinode-572652 node start                                                             | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC | 30 Jan 24 19:52 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-572652                                                                | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC |                     |
	| stop    | -p multinode-572652                                                                     | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:52 UTC |                     |
	| start   | -p multinode-572652                                                                     | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 19:54 UTC | 30 Jan 24 20:04 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-572652                                                                | multinode-572652 | jenkins | v1.32.0 | 30 Jan 24 20:04 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 19:54:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 19:54:41.452548   28131 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:54:41.452700   28131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:54:41.452709   28131 out.go:309] Setting ErrFile to fd 2...
	I0130 19:54:41.452714   28131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:54:41.452927   28131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:54:41.453520   28131 out.go:303] Setting JSON to false
	I0130 19:54:41.454445   28131 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2227,"bootTime":1706642255,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:54:41.454496   28131 start.go:138] virtualization: kvm guest
	I0130 19:54:41.456777   28131 out.go:177] * [multinode-572652] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:54:41.458462   28131 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 19:54:41.458461   28131 notify.go:220] Checking for updates...
	I0130 19:54:41.459626   28131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:54:41.460835   28131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 19:54:41.462051   28131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:54:41.463429   28131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 19:54:41.464759   28131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 19:54:41.466770   28131 config.go:182] Loaded profile config "multinode-572652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:54:41.466851   28131 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:54:41.467282   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:54:41.467345   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:54:41.481494   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
	I0130 19:54:41.481870   28131 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:54:41.482413   28131 main.go:141] libmachine: Using API Version  1
	I0130 19:54:41.482438   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:54:41.482781   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:54:41.482959   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 19:54:41.519163   28131 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 19:54:41.520299   28131 start.go:298] selected driver: kvm2
	I0130 19:54:41.520313   28131 start.go:902] validating driver "kvm2" against &{Name:multinode-572652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-572652 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:54:41.520458   28131 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 19:54:41.520739   28131 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:54:41.520808   28131 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 19:54:41.535256   28131 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 19:54:41.536089   28131 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 19:54:41.536156   28131 cni.go:84] Creating CNI manager for ""
	I0130 19:54:41.536168   28131 cni.go:136] 3 nodes found, recommending kindnet
	I0130 19:54:41.536175   28131 start_flags.go:321] config:
	{Name:multinode-572652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-572652 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:54:41.536373   28131 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:54:41.538364   28131 out.go:177] * Starting control plane node multinode-572652 in cluster multinode-572652
	I0130 19:54:41.539679   28131 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 19:54:41.539709   28131 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 19:54:41.539716   28131 cache.go:56] Caching tarball of preloaded images
	I0130 19:54:41.539811   28131 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 19:54:41.539826   28131 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 19:54:41.539974   28131 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/config.json ...
	I0130 19:54:41.540156   28131 start.go:365] acquiring machines lock for multinode-572652: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 19:54:41.540200   28131 start.go:369] acquired machines lock for "multinode-572652" in 25.005µs
	I0130 19:54:41.540227   28131 start.go:96] Skipping create...Using existing machine configuration
	I0130 19:54:41.540236   28131 fix.go:54] fixHost starting: 
	I0130 19:54:41.540501   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:54:41.540539   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:54:41.554082   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0130 19:54:41.554505   28131 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:54:41.554950   28131 main.go:141] libmachine: Using API Version  1
	I0130 19:54:41.554972   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:54:41.555249   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:54:41.555422   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 19:54:41.555539   28131 main.go:141] libmachine: (multinode-572652) Calling .GetState
	I0130 19:54:41.556914   28131 fix.go:102] recreateIfNeeded on multinode-572652: state=Running err=<nil>
	W0130 19:54:41.556947   28131 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 19:54:41.559832   28131 out.go:177] * Updating the running kvm2 "multinode-572652" VM ...
	I0130 19:54:41.561073   28131 machine.go:88] provisioning docker machine ...
	I0130 19:54:41.561090   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 19:54:41.561273   28131 main.go:141] libmachine: (multinode-572652) Calling .GetMachineName
	I0130 19:54:41.561415   28131 buildroot.go:166] provisioning hostname "multinode-572652"
	I0130 19:54:41.561439   28131 main.go:141] libmachine: (multinode-572652) Calling .GetMachineName
	I0130 19:54:41.561560   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 19:54:41.563652   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:54:41.564079   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:49:29 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 19:54:41.564107   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:54:41.564204   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 19:54:41.564377   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 19:54:41.564522   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 19:54:41.564646   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 19:54:41.564776   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 19:54:41.565106   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0130 19:54:41.565122   28131 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-572652 && echo "multinode-572652" | sudo tee /etc/hostname
	I0130 19:55:00.115507   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:06.195567   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:09.267508   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:15.347538   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:18.419480   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:24.503504   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:27.571576   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:33.651544   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:36.723506   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:42.807527   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:45.875486   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:51.955516   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:55:55.027525   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:01.107540   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:04.179537   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:10.259567   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:13.331520   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:19.411530   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:22.483469   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:28.563521   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:31.635498   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:37.715521   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:40.787506   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:46.867539   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:49.939517   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:56.019585   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:56:59.091575   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:05.171522   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:08.243542   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:14.323505   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:17.395522   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:23.475523   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:26.547470   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:32.627557   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:35.699543   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:41.779513   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:44.851502   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:50.931519   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:57:54.003521   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:00.083575   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:03.155504   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:09.235499   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:12.307511   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:18.387519   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:21.459499   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:27.539523   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:30.611547   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:36.691524   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:39.763471   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:45.843558   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:48.915518   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:54.995585   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:58:58.067522   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:59:04.147538   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:59:07.219502   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:59:13.299498   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:59:16.371461   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:59:22.451489   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:59:25.523520   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:59:31.603497   28131 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I0130 19:59:34.605998   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 19:59:34.606028   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 19:59:34.607906   28131 machine.go:91] provisioned docker machine in 4m53.046816344s
	I0130 19:59:34.607944   28131 fix.go:56] fixHost completed within 4m53.067708009s
	I0130 19:59:34.607948   28131 start.go:83] releasing machines lock for "multinode-572652", held for 4m53.067738241s
	W0130 19:59:34.607966   28131 start.go:694] error starting host: provision: host is not running
	W0130 19:59:34.608059   28131 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0130 19:59:34.608072   28131 start.go:709] Will try again in 5 seconds ...
	I0130 19:59:39.611007   28131 start.go:365] acquiring machines lock for multinode-572652: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 19:59:39.611109   28131 start.go:369] acquired machines lock for "multinode-572652" in 64.09µs
	I0130 19:59:39.611140   28131 start.go:96] Skipping create...Using existing machine configuration
	I0130 19:59:39.611148   28131 fix.go:54] fixHost starting: 
	I0130 19:59:39.611457   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:59:39.611479   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:59:39.625866   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0130 19:59:39.626255   28131 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:59:39.626744   28131 main.go:141] libmachine: Using API Version  1
	I0130 19:59:39.626769   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:59:39.627098   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:59:39.627310   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 19:59:39.627454   28131 main.go:141] libmachine: (multinode-572652) Calling .GetState
	I0130 19:59:39.629076   28131 fix.go:102] recreateIfNeeded on multinode-572652: state=Stopped err=<nil>
	I0130 19:59:39.629106   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	W0130 19:59:39.629290   28131 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 19:59:39.632365   28131 out.go:177] * Restarting existing kvm2 VM for "multinode-572652" ...
	I0130 19:59:39.633805   28131 main.go:141] libmachine: (multinode-572652) Calling .Start
	I0130 19:59:39.633979   28131 main.go:141] libmachine: (multinode-572652) Ensuring networks are active...
	I0130 19:59:39.634677   28131 main.go:141] libmachine: (multinode-572652) Ensuring network default is active
	I0130 19:59:39.635011   28131 main.go:141] libmachine: (multinode-572652) Ensuring network mk-multinode-572652 is active
	I0130 19:59:39.635299   28131 main.go:141] libmachine: (multinode-572652) Getting domain xml...
	I0130 19:59:39.635919   28131 main.go:141] libmachine: (multinode-572652) Creating domain...
	I0130 19:59:40.805664   28131 main.go:141] libmachine: (multinode-572652) Waiting to get IP...
	I0130 19:59:40.806629   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:40.807084   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:40.807179   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:40.807068   28900 retry.go:31] will retry after 293.372721ms: waiting for machine to come up
	I0130 19:59:41.102671   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:41.103244   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:41.103283   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:41.103198   28900 retry.go:31] will retry after 339.552173ms: waiting for machine to come up
	I0130 19:59:41.444819   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:41.445177   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:41.445203   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:41.445135   28900 retry.go:31] will retry after 332.547917ms: waiting for machine to come up
	I0130 19:59:41.779752   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:41.780174   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:41.780223   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:41.780158   28900 retry.go:31] will retry after 381.501907ms: waiting for machine to come up
	I0130 19:59:42.163752   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:42.164181   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:42.164205   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:42.164139   28900 retry.go:31] will retry after 715.094625ms: waiting for machine to come up
	I0130 19:59:42.881240   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:42.881715   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:42.881759   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:42.881682   28900 retry.go:31] will retry after 671.935309ms: waiting for machine to come up
	I0130 19:59:43.555721   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:43.556156   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:43.556196   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:43.556110   28900 retry.go:31] will retry after 800.400484ms: waiting for machine to come up
	I0130 19:59:44.357875   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:44.358359   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:44.358391   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:44.358304   28900 retry.go:31] will retry after 938.607515ms: waiting for machine to come up
	I0130 19:59:45.298531   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:45.298976   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:45.299004   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:45.298945   28900 retry.go:31] will retry after 1.332501382s: waiting for machine to come up
	I0130 19:59:46.633439   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:46.633821   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:46.633852   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:46.633771   28900 retry.go:31] will retry after 2.032225418s: waiting for machine to come up
	I0130 19:59:48.668360   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:48.668754   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:48.668774   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:48.668728   28900 retry.go:31] will retry after 2.175285956s: waiting for machine to come up
	I0130 19:59:50.845743   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:50.846295   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:50.846329   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:50.846246   28900 retry.go:31] will retry after 2.668720394s: waiting for machine to come up
	I0130 19:59:53.517247   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:53.517725   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:53.517755   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:53.517684   28900 retry.go:31] will retry after 2.883461976s: waiting for machine to come up
	I0130 19:59:56.404019   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:56.404380   28131 main.go:141] libmachine: (multinode-572652) DBG | unable to find current IP address of domain multinode-572652 in network mk-multinode-572652
	I0130 19:59:56.404404   28131 main.go:141] libmachine: (multinode-572652) DBG | I0130 19:59:56.404331   28900 retry.go:31] will retry after 3.458118325s: waiting for machine to come up
	I0130 19:59:59.864271   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:59.864667   28131 main.go:141] libmachine: (multinode-572652) Found IP for machine: 192.168.39.186
	I0130 19:59:59.864688   28131 main.go:141] libmachine: (multinode-572652) Reserving static IP address...
	I0130 19:59:59.864700   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has current primary IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:59.865127   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "multinode-572652", mac: "52:54:00:8f:1f:80", ip: "192.168.39.186"} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 19:59:59.865162   28131 main.go:141] libmachine: (multinode-572652) DBG | skip adding static IP to network mk-multinode-572652 - found existing host DHCP lease matching {name: "multinode-572652", mac: "52:54:00:8f:1f:80", ip: "192.168.39.186"}
	I0130 19:59:59.865180   28131 main.go:141] libmachine: (multinode-572652) Reserved static IP address: 192.168.39.186
	I0130 19:59:59.865193   28131 main.go:141] libmachine: (multinode-572652) Waiting for SSH to be available...
	I0130 19:59:59.865209   28131 main.go:141] libmachine: (multinode-572652) DBG | Getting to WaitForSSH function...
	I0130 19:59:59.867151   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:59.867488   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 19:59:59.867516   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:59.867659   28131 main.go:141] libmachine: (multinode-572652) DBG | Using SSH client type: external
	I0130 19:59:59.867682   28131 main.go:141] libmachine: (multinode-572652) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652/id_rsa (-rw-------)
	I0130 19:59:59.867704   28131 main.go:141] libmachine: (multinode-572652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 19:59:59.867714   28131 main.go:141] libmachine: (multinode-572652) DBG | About to run SSH command:
	I0130 19:59:59.867722   28131 main.go:141] libmachine: (multinode-572652) DBG | exit 0
	I0130 19:59:59.955635   28131 main.go:141] libmachine: (multinode-572652) DBG | SSH cmd err, output: <nil>: 
	I0130 19:59:59.955963   28131 main.go:141] libmachine: (multinode-572652) Calling .GetConfigRaw
	I0130 19:59:59.956546   28131 main.go:141] libmachine: (multinode-572652) Calling .GetIP
	I0130 19:59:59.958727   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:59.959031   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 19:59:59.959074   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:59.959239   28131 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/config.json ...
	I0130 19:59:59.959452   28131 machine.go:88] provisioning docker machine ...
	I0130 19:59:59.959469   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 19:59:59.959633   28131 main.go:141] libmachine: (multinode-572652) Calling .GetMachineName
	I0130 19:59:59.959760   28131 buildroot.go:166] provisioning hostname "multinode-572652"
	I0130 19:59:59.959826   28131 main.go:141] libmachine: (multinode-572652) Calling .GetMachineName
	I0130 19:59:59.959965   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 19:59:59.962013   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:59.962347   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 19:59:59.962378   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:59:59.962476   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 19:59:59.962644   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 19:59:59.962753   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 19:59:59.962877   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 19:59:59.963014   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 19:59:59.963347   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0130 19:59:59.963361   28131 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-572652 && echo "multinode-572652" | sudo tee /etc/hostname
	I0130 20:00:00.088346   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-572652
	
	I0130 20:00:00.088375   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:00:00.091221   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.091584   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:00:00.091609   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.091737   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 20:00:00.091915   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:00:00.092064   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:00:00.092191   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 20:00:00.092363   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 20:00:00.092705   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0130 20:00:00.092778   28131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-572652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-572652/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-572652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:00:00.208094   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:00:00.208125   28131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:00:00.208148   28131 buildroot.go:174] setting up certificates
	I0130 20:00:00.208161   28131 provision.go:83] configureAuth start
	I0130 20:00:00.208179   28131 main.go:141] libmachine: (multinode-572652) Calling .GetMachineName
	I0130 20:00:00.208449   28131 main.go:141] libmachine: (multinode-572652) Calling .GetIP
	I0130 20:00:00.210915   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.211291   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:00:00.211322   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.211441   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:00:00.213545   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.213883   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:00:00.213902   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.214040   28131 provision.go:138] copyHostCerts
	I0130 20:00:00.214071   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:00:00.214097   28131 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:00:00.214106   28131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:00:00.214188   28131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:00:00.214283   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:00:00.214314   28131 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:00:00.214324   28131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:00:00.214362   28131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:00:00.214430   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:00:00.214454   28131 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:00:00.214460   28131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:00:00.214495   28131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:00:00.214568   28131 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.multinode-572652 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube multinode-572652]
	I0130 20:00:00.444896   28131 provision.go:172] copyRemoteCerts
	I0130 20:00:00.444964   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:00:00.444990   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:00:00.447793   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.448144   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:00:00.448167   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.448322   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 20:00:00.448518   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:00:00.448685   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 20:00:00.448800   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652/id_rsa Username:docker}
	I0130 20:00:00.532169   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0130 20:00:00.532235   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:00:00.554956   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0130 20:00:00.555034   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0130 20:00:00.577282   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0130 20:00:00.577351   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:00:00.600041   28131 provision.go:86] duration metric: configureAuth took 391.860847ms
	I0130 20:00:00.600084   28131 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:00:00.600322   28131 config.go:182] Loaded profile config "multinode-572652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:00:00.600393   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:00:00.603094   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.603512   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:00:00.603538   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.603742   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 20:00:00.603908   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:00:00.604048   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:00:00.604203   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 20:00:00.604385   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 20:00:00.604775   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0130 20:00:00.604797   28131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:00:00.916521   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:00:00.916549   28131 machine.go:91] provisioned docker machine in 957.08254ms
	I0130 20:00:00.916564   28131 start.go:300] post-start starting for "multinode-572652" (driver="kvm2")
	I0130 20:00:00.916578   28131 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:00:00.916617   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 20:00:00.916913   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:00:00.916945   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:00:00.919553   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.919956   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:00:00.919997   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:00.920166   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 20:00:00.920362   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:00:00.920518   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 20:00:00.920666   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652/id_rsa Username:docker}
	I0130 20:00:01.005681   28131 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:00:01.009738   28131 command_runner.go:130] > NAME=Buildroot
	I0130 20:00:01.009760   28131 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0130 20:00:01.009767   28131 command_runner.go:130] > ID=buildroot
	I0130 20:00:01.009775   28131 command_runner.go:130] > VERSION_ID=2021.02.12
	I0130 20:00:01.009782   28131 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0130 20:00:01.009972   28131 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:00:01.009990   28131 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:00:01.010076   28131 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:00:01.010162   28131 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:00:01.010172   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> /etc/ssl/certs/116672.pem
	I0130 20:00:01.010253   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:00:01.019549   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:00:01.041297   28131 start.go:303] post-start completed in 124.717438ms
	I0130 20:00:01.041324   28131 fix.go:56] fixHost completed within 21.430175364s
	I0130 20:00:01.041349   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:00:01.044079   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:01.044435   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:00:01.044474   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:01.044641   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 20:00:01.044825   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:00:01.044995   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:00:01.045166   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 20:00:01.045315   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 20:00:01.045668   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0130 20:00:01.045683   28131 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:00:01.156326   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706644801.106596488
	
	I0130 20:00:01.156359   28131 fix.go:206] guest clock: 1706644801.106596488
	I0130 20:00:01.156370   28131 fix.go:219] Guest: 2024-01-30 20:00:01.106596488 +0000 UTC Remote: 2024-01-30 20:00:01.041329312 +0000 UTC m=+319.637037456 (delta=65.267176ms)
	I0130 20:00:01.156403   28131 fix.go:190] guest clock delta is within tolerance: 65.267176ms
	I0130 20:00:01.156408   28131 start.go:83] releasing machines lock for "multinode-572652", held for 21.545291605s
	I0130 20:00:01.156437   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 20:00:01.156684   28131 main.go:141] libmachine: (multinode-572652) Calling .GetIP
	I0130 20:00:01.159041   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:01.159384   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:00:01.159423   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:01.159616   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 20:00:01.160094   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 20:00:01.160255   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 20:00:01.160331   28131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:00:01.160387   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:00:01.160410   28131 ssh_runner.go:195] Run: cat /version.json
	I0130 20:00:01.160426   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:00:01.162741   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:01.162984   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:01.163113   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:00:01.163139   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:01.163260   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 20:00:01.163455   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:00:01.163445   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:00:01.163513   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:01.163582   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 20:00:01.163636   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 20:00:01.163720   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:00:01.163786   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652/id_rsa Username:docker}
	I0130 20:00:01.163857   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 20:00:01.163959   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652/id_rsa Username:docker}
	I0130 20:00:01.263374   28131 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0130 20:00:01.263427   28131 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0130 20:00:01.263604   28131 ssh_runner.go:195] Run: systemctl --version
	I0130 20:00:01.269014   28131 command_runner.go:130] > systemd 247 (247)
	I0130 20:00:01.269051   28131 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0130 20:00:01.269115   28131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:00:01.410891   28131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0130 20:00:01.417952   28131 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0130 20:00:01.418271   28131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:00:01.418331   28131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:00:01.432882   28131 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0130 20:00:01.432922   28131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:00:01.432933   28131 start.go:475] detecting cgroup driver to use...
	I0130 20:00:01.432981   28131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:00:01.450660   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:00:01.466326   28131 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:00:01.466378   28131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:00:01.481780   28131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:00:01.497308   28131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:00:01.513032   28131 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0130 20:00:01.610754   28131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:00:01.723194   28131 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0130 20:00:01.723298   28131 docker.go:233] disabling docker service ...
	I0130 20:00:01.723352   28131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:00:01.735746   28131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:00:01.746978   28131 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0130 20:00:01.747052   28131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:00:01.759855   28131 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0130 20:00:01.850546   28131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:00:01.960738   28131 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0130 20:00:01.960774   28131 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0130 20:00:01.960943   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:00:01.972790   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:00:01.989107   28131 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0130 20:00:01.989433   28131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:00:01.989493   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:00:01.998658   28131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:00:01.998733   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:00:02.007992   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:00:02.016968   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
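	(For readers following the CRI-O setup above: the three sed commands edit the 02-crio.conf drop-in so that it ends up with roughly the following values. This is a sketch assuming the stock minikube ISO layout of that file, not a capture from this run.)

		$ grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
		pause_image = "registry.k8s.io/pause:3.9"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"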
	I0130 20:00:02.026032   28131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:00:02.035444   28131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:00:02.043466   28131 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:00:02.043499   28131 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:00:02.043531   28131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:00:02.056840   28131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:00:02.065577   28131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:00:02.177137   28131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:00:02.343292   28131 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:00:02.343365   28131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:00:02.349748   28131 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0130 20:00:02.349770   28131 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0130 20:00:02.349782   28131 command_runner.go:130] > Device: 16h/22d	Inode: 748         Links: 1
	I0130 20:00:02.349789   28131 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 20:00:02.349797   28131 command_runner.go:130] > Access: 2024-01-30 20:00:02.278662116 +0000
	I0130 20:00:02.349807   28131 command_runner.go:130] > Modify: 2024-01-30 20:00:02.278662116 +0000
	I0130 20:00:02.349819   28131 command_runner.go:130] > Change: 2024-01-30 20:00:02.278662116 +0000
	I0130 20:00:02.349827   28131 command_runner.go:130] >  Birth: -
	I0130 20:00:02.349845   28131 start.go:543] Will wait 60s for crictl version
	I0130 20:00:02.349889   28131 ssh_runner.go:195] Run: which crictl
	I0130 20:00:02.353568   28131 command_runner.go:130] > /usr/bin/crictl
	I0130 20:00:02.353622   28131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:00:02.394186   28131 command_runner.go:130] > Version:  0.1.0
	I0130 20:00:02.394223   28131 command_runner.go:130] > RuntimeName:  cri-o
	I0130 20:00:02.394230   28131 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0130 20:00:02.394238   28131 command_runner.go:130] > RuntimeApiVersion:  v1
	I0130 20:00:02.394295   28131 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:00:02.394371   28131 ssh_runner.go:195] Run: crio --version
	I0130 20:00:02.446016   28131 command_runner.go:130] > crio version 1.24.1
	I0130 20:00:02.446038   28131 command_runner.go:130] > Version:          1.24.1
	I0130 20:00:02.446045   28131 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 20:00:02.446050   28131 command_runner.go:130] > GitTreeState:     dirty
	I0130 20:00:02.446060   28131 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 20:00:02.446068   28131 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 20:00:02.446074   28131 command_runner.go:130] > Compiler:         gc
	I0130 20:00:02.446082   28131 command_runner.go:130] > Platform:         linux/amd64
	I0130 20:00:02.446090   28131 command_runner.go:130] > Linkmode:         dynamic
	I0130 20:00:02.446101   28131 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 20:00:02.446109   28131 command_runner.go:130] > SeccompEnabled:   true
	I0130 20:00:02.446117   28131 command_runner.go:130] > AppArmorEnabled:  false
	I0130 20:00:02.446237   28131 ssh_runner.go:195] Run: crio --version
	I0130 20:00:02.488523   28131 command_runner.go:130] > crio version 1.24.1
	I0130 20:00:02.488554   28131 command_runner.go:130] > Version:          1.24.1
	I0130 20:00:02.488561   28131 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 20:00:02.488568   28131 command_runner.go:130] > GitTreeState:     dirty
	I0130 20:00:02.488578   28131 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 20:00:02.488587   28131 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 20:00:02.488593   28131 command_runner.go:130] > Compiler:         gc
	I0130 20:00:02.488599   28131 command_runner.go:130] > Platform:         linux/amd64
	I0130 20:00:02.488608   28131 command_runner.go:130] > Linkmode:         dynamic
	I0130 20:00:02.488620   28131 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 20:00:02.488636   28131 command_runner.go:130] > SeccompEnabled:   true
	I0130 20:00:02.488640   28131 command_runner.go:130] > AppArmorEnabled:  false
	I0130 20:00:02.491881   28131 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 20:00:02.493142   28131 main.go:141] libmachine: (multinode-572652) Calling .GetIP
	I0130 20:00:02.495498   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:02.495773   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:00:02.495804   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:00:02.495995   28131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 20:00:02.500036   28131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
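	(The one-liner above is dense; unrolled, it is equivalent to the following readable sketch, not the literal command minikube runs.)

		# drop any stale host.minikube.internal entry, then append the current gateway IP
		grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new
		echo "192.168.39.1	host.minikube.internal" >> /tmp/hosts.new
		sudo cp /tmp/hosts.new /etc/hosts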
	I0130 20:00:02.511850   28131 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:00:02.511917   28131 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:00:02.547577   28131 command_runner.go:130] > {
	I0130 20:00:02.547601   28131 command_runner.go:130] >   "images": [
	I0130 20:00:02.547609   28131 command_runner.go:130] >     {
	I0130 20:00:02.547622   28131 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0130 20:00:02.547631   28131 command_runner.go:130] >       "repoTags": [
	I0130 20:00:02.547640   28131 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0130 20:00:02.547647   28131 command_runner.go:130] >       ],
	I0130 20:00:02.547654   28131 command_runner.go:130] >       "repoDigests": [
	I0130 20:00:02.547666   28131 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0130 20:00:02.547676   28131 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0130 20:00:02.547681   28131 command_runner.go:130] >       ],
	I0130 20:00:02.547685   28131 command_runner.go:130] >       "size": "750414",
	I0130 20:00:02.547690   28131 command_runner.go:130] >       "uid": {
	I0130 20:00:02.547696   28131 command_runner.go:130] >         "value": "65535"
	I0130 20:00:02.547704   28131 command_runner.go:130] >       },
	I0130 20:00:02.547711   28131 command_runner.go:130] >       "username": "",
	I0130 20:00:02.547723   28131 command_runner.go:130] >       "spec": null,
	I0130 20:00:02.547731   28131 command_runner.go:130] >       "pinned": false
	I0130 20:00:02.547737   28131 command_runner.go:130] >     }
	I0130 20:00:02.547744   28131 command_runner.go:130] >   ]
	I0130 20:00:02.547750   28131 command_runner.go:130] > }
	I0130 20:00:02.547876   28131 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:00:02.547946   28131 ssh_runner.go:195] Run: which lz4
	I0130 20:00:02.551608   28131 command_runner.go:130] > /usr/bin/lz4
	I0130 20:00:02.551633   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0130 20:00:02.551705   28131 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:00:02.555683   28131 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:00:02.555721   28131 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:00:02.555735   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:00:04.355261   28131 crio.go:444] Took 1.803569 seconds to copy over tarball
	I0130 20:00:04.355337   28131 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:00:07.410160   28131 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.054788073s)
	I0130 20:00:07.410190   28131 crio.go:451] Took 3.054899 seconds to extract the tarball
	I0130 20:00:07.410198   28131 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:00:07.450787   28131 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:00:07.499051   28131 command_runner.go:130] > {
	I0130 20:00:07.499072   28131 command_runner.go:130] >   "images": [
	I0130 20:00:07.499078   28131 command_runner.go:130] >     {
	I0130 20:00:07.499090   28131 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0130 20:00:07.499097   28131 command_runner.go:130] >       "repoTags": [
	I0130 20:00:07.499104   28131 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0130 20:00:07.499110   28131 command_runner.go:130] >       ],
	I0130 20:00:07.499117   28131 command_runner.go:130] >       "repoDigests": [
	I0130 20:00:07.499129   28131 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0130 20:00:07.499145   28131 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0130 20:00:07.499156   28131 command_runner.go:130] >       ],
	I0130 20:00:07.499165   28131 command_runner.go:130] >       "size": "65258016",
	I0130 20:00:07.499176   28131 command_runner.go:130] >       "uid": null,
	I0130 20:00:07.499185   28131 command_runner.go:130] >       "username": "",
	I0130 20:00:07.499202   28131 command_runner.go:130] >       "spec": null,
	I0130 20:00:07.499214   28131 command_runner.go:130] >       "pinned": false
	I0130 20:00:07.499228   28131 command_runner.go:130] >     },
	I0130 20:00:07.499237   28131 command_runner.go:130] >     {
	I0130 20:00:07.499249   28131 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0130 20:00:07.499260   28131 command_runner.go:130] >       "repoTags": [
	I0130 20:00:07.499279   28131 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0130 20:00:07.499289   28131 command_runner.go:130] >       ],
	I0130 20:00:07.499297   28131 command_runner.go:130] >       "repoDigests": [
	I0130 20:00:07.499314   28131 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0130 20:00:07.499330   28131 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0130 20:00:07.499340   28131 command_runner.go:130] >       ],
	I0130 20:00:07.499352   28131 command_runner.go:130] >       "size": "31470524",
	I0130 20:00:07.499363   28131 command_runner.go:130] >       "uid": null,
	I0130 20:00:07.499373   28131 command_runner.go:130] >       "username": "",
	I0130 20:00:07.499382   28131 command_runner.go:130] >       "spec": null,
	I0130 20:00:07.499390   28131 command_runner.go:130] >       "pinned": false
	I0130 20:00:07.499400   28131 command_runner.go:130] >     },
	I0130 20:00:07.499406   28131 command_runner.go:130] >     {
	I0130 20:00:07.499421   28131 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0130 20:00:07.499436   28131 command_runner.go:130] >       "repoTags": [
	I0130 20:00:07.499459   28131 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0130 20:00:07.499533   28131 command_runner.go:130] >       ],
	I0130 20:00:07.499552   28131 command_runner.go:130] >       "repoDigests": [
	I0130 20:00:07.499568   28131 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0130 20:00:07.499584   28131 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0130 20:00:07.499595   28131 command_runner.go:130] >       ],
	I0130 20:00:07.499605   28131 command_runner.go:130] >       "size": "53621675",
	I0130 20:00:07.499615   28131 command_runner.go:130] >       "uid": null,
	I0130 20:00:07.499625   28131 command_runner.go:130] >       "username": "",
	I0130 20:00:07.499633   28131 command_runner.go:130] >       "spec": null,
	I0130 20:00:07.499644   28131 command_runner.go:130] >       "pinned": false
	I0130 20:00:07.499651   28131 command_runner.go:130] >     },
	I0130 20:00:07.499659   28131 command_runner.go:130] >     {
	I0130 20:00:07.499671   28131 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0130 20:00:07.499681   28131 command_runner.go:130] >       "repoTags": [
	I0130 20:00:07.499693   28131 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0130 20:00:07.499703   28131 command_runner.go:130] >       ],
	I0130 20:00:07.499722   28131 command_runner.go:130] >       "repoDigests": [
	I0130 20:00:07.499739   28131 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0130 20:00:07.499754   28131 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0130 20:00:07.499775   28131 command_runner.go:130] >       ],
	I0130 20:00:07.499788   28131 command_runner.go:130] >       "size": "295456551",
	I0130 20:00:07.499795   28131 command_runner.go:130] >       "uid": {
	I0130 20:00:07.499802   28131 command_runner.go:130] >         "value": "0"
	I0130 20:00:07.499812   28131 command_runner.go:130] >       },
	I0130 20:00:07.499826   28131 command_runner.go:130] >       "username": "",
	I0130 20:00:07.499834   28131 command_runner.go:130] >       "spec": null,
	I0130 20:00:07.499845   28131 command_runner.go:130] >       "pinned": false
	I0130 20:00:07.499852   28131 command_runner.go:130] >     },
	I0130 20:00:07.499862   28131 command_runner.go:130] >     {
	I0130 20:00:07.499876   28131 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0130 20:00:07.499888   28131 command_runner.go:130] >       "repoTags": [
	I0130 20:00:07.499898   28131 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0130 20:00:07.499908   28131 command_runner.go:130] >       ],
	I0130 20:00:07.499917   28131 command_runner.go:130] >       "repoDigests": [
	I0130 20:00:07.499938   28131 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0130 20:00:07.499957   28131 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0130 20:00:07.499967   28131 command_runner.go:130] >       ],
	I0130 20:00:07.499977   28131 command_runner.go:130] >       "size": "127226832",
	I0130 20:00:07.499987   28131 command_runner.go:130] >       "uid": {
	I0130 20:00:07.499994   28131 command_runner.go:130] >         "value": "0"
	I0130 20:00:07.500004   28131 command_runner.go:130] >       },
	I0130 20:00:07.500013   28131 command_runner.go:130] >       "username": "",
	I0130 20:00:07.500024   28131 command_runner.go:130] >       "spec": null,
	I0130 20:00:07.500034   28131 command_runner.go:130] >       "pinned": false
	I0130 20:00:07.500042   28131 command_runner.go:130] >     },
	I0130 20:00:07.500049   28131 command_runner.go:130] >     {
	I0130 20:00:07.500061   28131 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0130 20:00:07.500074   28131 command_runner.go:130] >       "repoTags": [
	I0130 20:00:07.500090   28131 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0130 20:00:07.500097   28131 command_runner.go:130] >       ],
	I0130 20:00:07.500108   28131 command_runner.go:130] >       "repoDigests": [
	I0130 20:00:07.500122   28131 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0130 20:00:07.500144   28131 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0130 20:00:07.500154   28131 command_runner.go:130] >       ],
	I0130 20:00:07.500164   28131 command_runner.go:130] >       "size": "123261750",
	I0130 20:00:07.500174   28131 command_runner.go:130] >       "uid": {
	I0130 20:00:07.500184   28131 command_runner.go:130] >         "value": "0"
	I0130 20:00:07.500193   28131 command_runner.go:130] >       },
	I0130 20:00:07.500201   28131 command_runner.go:130] >       "username": "",
	I0130 20:00:07.500211   28131 command_runner.go:130] >       "spec": null,
	I0130 20:00:07.500219   28131 command_runner.go:130] >       "pinned": false
	I0130 20:00:07.500228   28131 command_runner.go:130] >     },
	I0130 20:00:07.500237   28131 command_runner.go:130] >     {
	I0130 20:00:07.500249   28131 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0130 20:00:07.500260   28131 command_runner.go:130] >       "repoTags": [
	I0130 20:00:07.500269   28131 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0130 20:00:07.500296   28131 command_runner.go:130] >       ],
	I0130 20:00:07.500369   28131 command_runner.go:130] >       "repoDigests": [
	I0130 20:00:07.500387   28131 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0130 20:00:07.500399   28131 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0130 20:00:07.500422   28131 command_runner.go:130] >       ],
	I0130 20:00:07.500434   28131 command_runner.go:130] >       "size": "74749335",
	I0130 20:00:07.500442   28131 command_runner.go:130] >       "uid": null,
	I0130 20:00:07.500453   28131 command_runner.go:130] >       "username": "",
	I0130 20:00:07.500461   28131 command_runner.go:130] >       "spec": null,
	I0130 20:00:07.500471   28131 command_runner.go:130] >       "pinned": false
	I0130 20:00:07.500478   28131 command_runner.go:130] >     },
	I0130 20:00:07.500487   28131 command_runner.go:130] >     {
	I0130 20:00:07.500504   28131 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0130 20:00:07.500515   28131 command_runner.go:130] >       "repoTags": [
	I0130 20:00:07.500526   28131 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0130 20:00:07.500533   28131 command_runner.go:130] >       ],
	I0130 20:00:07.500543   28131 command_runner.go:130] >       "repoDigests": [
	I0130 20:00:07.500574   28131 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0130 20:00:07.500590   28131 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0130 20:00:07.500600   28131 command_runner.go:130] >       ],
	I0130 20:00:07.500612   28131 command_runner.go:130] >       "size": "61551410",
	I0130 20:00:07.500622   28131 command_runner.go:130] >       "uid": {
	I0130 20:00:07.500642   28131 command_runner.go:130] >         "value": "0"
	I0130 20:00:07.500651   28131 command_runner.go:130] >       },
	I0130 20:00:07.500660   28131 command_runner.go:130] >       "username": "",
	I0130 20:00:07.500669   28131 command_runner.go:130] >       "spec": null,
	I0130 20:00:07.500677   28131 command_runner.go:130] >       "pinned": false
	I0130 20:00:07.500686   28131 command_runner.go:130] >     },
	I0130 20:00:07.500693   28131 command_runner.go:130] >     {
	I0130 20:00:07.500707   28131 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0130 20:00:07.500717   28131 command_runner.go:130] >       "repoTags": [
	I0130 20:00:07.500729   28131 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0130 20:00:07.500736   28131 command_runner.go:130] >       ],
	I0130 20:00:07.500746   28131 command_runner.go:130] >       "repoDigests": [
	I0130 20:00:07.500760   28131 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0130 20:00:07.500775   28131 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0130 20:00:07.500793   28131 command_runner.go:130] >       ],
	I0130 20:00:07.500804   28131 command_runner.go:130] >       "size": "750414",
	I0130 20:00:07.500811   28131 command_runner.go:130] >       "uid": {
	I0130 20:00:07.500822   28131 command_runner.go:130] >         "value": "65535"
	I0130 20:00:07.500835   28131 command_runner.go:130] >       },
	I0130 20:00:07.500846   28131 command_runner.go:130] >       "username": "",
	I0130 20:00:07.500854   28131 command_runner.go:130] >       "spec": null,
	I0130 20:00:07.500864   28131 command_runner.go:130] >       "pinned": false
	I0130 20:00:07.500871   28131 command_runner.go:130] >     }
	I0130 20:00:07.500878   28131 command_runner.go:130] >   ]
	I0130 20:00:07.500886   28131 command_runner.go:130] > }
	I0130 20:00:07.501003   28131 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:00:07.501014   28131 cache_images.go:84] Images are preloaded, skipping loading
	I0130 20:00:07.501087   28131 ssh_runner.go:195] Run: crio config
	I0130 20:00:07.549674   28131 command_runner.go:130] ! time="2024-01-30 20:00:07.500055081Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0130 20:00:07.549813   28131 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0130 20:00:07.556638   28131 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0130 20:00:07.556664   28131 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0130 20:00:07.556674   28131 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0130 20:00:07.556680   28131 command_runner.go:130] > #
	I0130 20:00:07.556699   28131 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0130 20:00:07.556711   28131 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0130 20:00:07.556722   28131 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0130 20:00:07.556734   28131 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0130 20:00:07.556749   28131 command_runner.go:130] > # reload'.
	I0130 20:00:07.556755   28131 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0130 20:00:07.556761   28131 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0130 20:00:07.556767   28131 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0130 20:00:07.556775   28131 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0130 20:00:07.556779   28131 command_runner.go:130] > [crio]
	I0130 20:00:07.556786   28131 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0130 20:00:07.556791   28131 command_runner.go:130] > # containers images, in this directory.
	I0130 20:00:07.556798   28131 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0130 20:00:07.556809   28131 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0130 20:00:07.556816   28131 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0130 20:00:07.556822   28131 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0130 20:00:07.556830   28131 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0130 20:00:07.556835   28131 command_runner.go:130] > storage_driver = "overlay"
	I0130 20:00:07.556840   28131 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0130 20:00:07.556857   28131 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0130 20:00:07.556864   28131 command_runner.go:130] > storage_option = [
	I0130 20:00:07.556868   28131 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0130 20:00:07.556874   28131 command_runner.go:130] > ]
	I0130 20:00:07.556881   28131 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0130 20:00:07.556889   28131 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0130 20:00:07.556896   28131 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0130 20:00:07.556902   28131 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0130 20:00:07.556908   28131 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0130 20:00:07.556913   28131 command_runner.go:130] > # always happen on a node reboot
	I0130 20:00:07.556917   28131 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0130 20:00:07.556926   28131 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0130 20:00:07.556931   28131 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0130 20:00:07.556944   28131 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0130 20:00:07.556951   28131 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0130 20:00:07.556959   28131 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0130 20:00:07.556968   28131 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0130 20:00:07.556975   28131 command_runner.go:130] > # internal_wipe = true
	I0130 20:00:07.556983   28131 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0130 20:00:07.556991   28131 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0130 20:00:07.556997   28131 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0130 20:00:07.557004   28131 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0130 20:00:07.557011   28131 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0130 20:00:07.557014   28131 command_runner.go:130] > [crio.api]
	I0130 20:00:07.557021   28131 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0130 20:00:07.557028   28131 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0130 20:00:07.557034   28131 command_runner.go:130] > # IP address on which the stream server will listen.
	I0130 20:00:07.557040   28131 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0130 20:00:07.557053   28131 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0130 20:00:07.557065   28131 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0130 20:00:07.557074   28131 command_runner.go:130] > # stream_port = "0"
	I0130 20:00:07.557089   28131 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0130 20:00:07.557099   28131 command_runner.go:130] > # stream_enable_tls = false
	I0130 20:00:07.557108   28131 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0130 20:00:07.557119   28131 command_runner.go:130] > # stream_idle_timeout = ""
	I0130 20:00:07.557128   28131 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0130 20:00:07.557145   28131 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0130 20:00:07.557154   28131 command_runner.go:130] > # minutes.
	I0130 20:00:07.557160   28131 command_runner.go:130] > # stream_tls_cert = ""
	I0130 20:00:07.557173   28131 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0130 20:00:07.557182   28131 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0130 20:00:07.557186   28131 command_runner.go:130] > # stream_tls_key = ""
	I0130 20:00:07.557192   28131 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0130 20:00:07.557200   28131 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0130 20:00:07.557206   28131 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0130 20:00:07.557210   28131 command_runner.go:130] > # stream_tls_ca = ""
	I0130 20:00:07.557218   28131 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 20:00:07.557227   28131 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0130 20:00:07.557234   28131 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 20:00:07.557239   28131 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0130 20:00:07.557259   28131 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0130 20:00:07.557271   28131 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0130 20:00:07.557275   28131 command_runner.go:130] > [crio.runtime]
	I0130 20:00:07.557280   28131 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0130 20:00:07.557287   28131 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0130 20:00:07.557294   28131 command_runner.go:130] > # "nofile=1024:2048"
	I0130 20:00:07.557300   28131 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0130 20:00:07.557306   28131 command_runner.go:130] > # default_ulimits = [
	I0130 20:00:07.557310   28131 command_runner.go:130] > # ]
	I0130 20:00:07.557318   28131 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0130 20:00:07.557322   28131 command_runner.go:130] > # no_pivot = false
	I0130 20:00:07.557330   28131 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0130 20:00:07.557336   28131 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0130 20:00:07.557341   28131 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0130 20:00:07.557347   28131 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0130 20:00:07.557352   28131 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0130 20:00:07.557361   28131 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 20:00:07.557366   28131 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0130 20:00:07.557374   28131 command_runner.go:130] > # Cgroup setting for conmon
	I0130 20:00:07.557380   28131 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0130 20:00:07.557387   28131 command_runner.go:130] > conmon_cgroup = "pod"
	I0130 20:00:07.557393   28131 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0130 20:00:07.557402   28131 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0130 20:00:07.557411   28131 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 20:00:07.557415   28131 command_runner.go:130] > conmon_env = [
	I0130 20:00:07.557422   28131 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0130 20:00:07.557427   28131 command_runner.go:130] > ]
	I0130 20:00:07.557433   28131 command_runner.go:130] > # Additional environment variables to set for all the
	I0130 20:00:07.557440   28131 command_runner.go:130] > # containers. These are overridden if set in the
	I0130 20:00:07.557446   28131 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0130 20:00:07.557450   28131 command_runner.go:130] > # default_env = [
	I0130 20:00:07.557454   28131 command_runner.go:130] > # ]
	I0130 20:00:07.557459   28131 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0130 20:00:07.557464   28131 command_runner.go:130] > # selinux = false
	I0130 20:00:07.557470   28131 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0130 20:00:07.557479   28131 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0130 20:00:07.557484   28131 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0130 20:00:07.557491   28131 command_runner.go:130] > # seccomp_profile = ""
	I0130 20:00:07.557496   28131 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0130 20:00:07.557504   28131 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0130 20:00:07.557513   28131 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0130 20:00:07.557519   28131 command_runner.go:130] > # which might increase security.
	I0130 20:00:07.557524   28131 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0130 20:00:07.557533   28131 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0130 20:00:07.557539   28131 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0130 20:00:07.557547   28131 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0130 20:00:07.557553   28131 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0130 20:00:07.557560   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:00:07.557565   28131 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0130 20:00:07.557573   28131 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0130 20:00:07.557577   28131 command_runner.go:130] > # the cgroup blockio controller.
	I0130 20:00:07.557584   28131 command_runner.go:130] > # blockio_config_file = ""
	I0130 20:00:07.557590   28131 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0130 20:00:07.557596   28131 command_runner.go:130] > # irqbalance daemon.
	I0130 20:00:07.557601   28131 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0130 20:00:07.557610   28131 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0130 20:00:07.557616   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:00:07.557620   28131 command_runner.go:130] > # rdt_config_file = ""
	I0130 20:00:07.557638   28131 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0130 20:00:07.557649   28131 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0130 20:00:07.557655   28131 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0130 20:00:07.557662   28131 command_runner.go:130] > # separate_pull_cgroup = ""
	I0130 20:00:07.557668   28131 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0130 20:00:07.557675   28131 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0130 20:00:07.557681   28131 command_runner.go:130] > # will be added.
	I0130 20:00:07.557685   28131 command_runner.go:130] > # default_capabilities = [
	I0130 20:00:07.557690   28131 command_runner.go:130] > # 	"CHOWN",
	I0130 20:00:07.557694   28131 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0130 20:00:07.557700   28131 command_runner.go:130] > # 	"FSETID",
	I0130 20:00:07.557703   28131 command_runner.go:130] > # 	"FOWNER",
	I0130 20:00:07.557707   28131 command_runner.go:130] > # 	"SETGID",
	I0130 20:00:07.557711   28131 command_runner.go:130] > # 	"SETUID",
	I0130 20:00:07.557715   28131 command_runner.go:130] > # 	"SETPCAP",
	I0130 20:00:07.557721   28131 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0130 20:00:07.557725   28131 command_runner.go:130] > # 	"KILL",
	I0130 20:00:07.557730   28131 command_runner.go:130] > # ]
	I0130 20:00:07.557738   28131 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0130 20:00:07.557746   28131 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 20:00:07.557750   28131 command_runner.go:130] > # default_sysctls = [
	I0130 20:00:07.557756   28131 command_runner.go:130] > # ]
	I0130 20:00:07.557761   28131 command_runner.go:130] > # List of devices on the host that a
	I0130 20:00:07.557767   28131 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0130 20:00:07.557773   28131 command_runner.go:130] > # allowed_devices = [
	I0130 20:00:07.557777   28131 command_runner.go:130] > # 	"/dev/fuse",
	I0130 20:00:07.557783   28131 command_runner.go:130] > # ]
	I0130 20:00:07.557788   28131 command_runner.go:130] > # List of additional devices. specified as
	I0130 20:00:07.557795   28131 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0130 20:00:07.557802   28131 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0130 20:00:07.557835   28131 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 20:00:07.557843   28131 command_runner.go:130] > # additional_devices = [
	I0130 20:00:07.557850   28131 command_runner.go:130] > # ]
	I0130 20:00:07.557855   28131 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0130 20:00:07.557858   28131 command_runner.go:130] > # cdi_spec_dirs = [
	I0130 20:00:07.557862   28131 command_runner.go:130] > # 	"/etc/cdi",
	I0130 20:00:07.557871   28131 command_runner.go:130] > # 	"/var/run/cdi",
	I0130 20:00:07.557877   28131 command_runner.go:130] > # ]
	I0130 20:00:07.557883   28131 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0130 20:00:07.557891   28131 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0130 20:00:07.557896   28131 command_runner.go:130] > # Defaults to false.
	I0130 20:00:07.557900   28131 command_runner.go:130] > # device_ownership_from_security_context = false
	I0130 20:00:07.557907   28131 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0130 20:00:07.557915   28131 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0130 20:00:07.557919   28131 command_runner.go:130] > # hooks_dir = [
	I0130 20:00:07.557925   28131 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0130 20:00:07.557929   28131 command_runner.go:130] > # ]
	I0130 20:00:07.557937   28131 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0130 20:00:07.557946   28131 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0130 20:00:07.557951   28131 command_runner.go:130] > # its default mounts from the following two files:
	I0130 20:00:07.557955   28131 command_runner.go:130] > #
	I0130 20:00:07.557961   28131 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0130 20:00:07.557970   28131 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0130 20:00:07.557975   28131 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0130 20:00:07.557982   28131 command_runner.go:130] > #
	I0130 20:00:07.557991   28131 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0130 20:00:07.557997   28131 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0130 20:00:07.558005   28131 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0130 20:00:07.558011   28131 command_runner.go:130] > #      only add mounts it finds in this file.
	I0130 20:00:07.558016   28131 command_runner.go:130] > #
	I0130 20:00:07.558020   28131 command_runner.go:130] > # default_mounts_file = ""
	I0130 20:00:07.558026   28131 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0130 20:00:07.558035   28131 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0130 20:00:07.558044   28131 command_runner.go:130] > pids_limit = 1024
	I0130 20:00:07.558056   28131 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0130 20:00:07.558069   28131 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0130 20:00:07.558082   28131 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0130 20:00:07.558101   28131 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0130 20:00:07.558111   28131 command_runner.go:130] > # log_size_max = -1
	I0130 20:00:07.558121   28131 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0130 20:00:07.558131   28131 command_runner.go:130] > # log_to_journald = false
	I0130 20:00:07.558143   28131 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0130 20:00:07.558156   28131 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0130 20:00:07.558164   28131 command_runner.go:130] > # Path to directory for container attach sockets.
	I0130 20:00:07.558169   28131 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0130 20:00:07.558177   28131 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0130 20:00:07.558181   28131 command_runner.go:130] > # bind_mount_prefix = ""
	I0130 20:00:07.558189   28131 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0130 20:00:07.558193   28131 command_runner.go:130] > # read_only = false
	I0130 20:00:07.558201   28131 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0130 20:00:07.558208   28131 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0130 20:00:07.558214   28131 command_runner.go:130] > # live configuration reload.
	I0130 20:00:07.558218   28131 command_runner.go:130] > # log_level = "info"
	I0130 20:00:07.558226   28131 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0130 20:00:07.558233   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:00:07.558239   28131 command_runner.go:130] > # log_filter = ""
	I0130 20:00:07.558245   28131 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0130 20:00:07.558253   28131 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0130 20:00:07.558257   28131 command_runner.go:130] > # separated by comma.
	I0130 20:00:07.558263   28131 command_runner.go:130] > # uid_mappings = ""
	I0130 20:00:07.558271   28131 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0130 20:00:07.558280   28131 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0130 20:00:07.558284   28131 command_runner.go:130] > # separated by comma.
	I0130 20:00:07.558290   28131 command_runner.go:130] > # gid_mappings = ""
	I0130 20:00:07.558296   28131 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0130 20:00:07.558304   28131 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 20:00:07.558310   28131 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 20:00:07.558317   28131 command_runner.go:130] > # minimum_mappable_uid = -1
	I0130 20:00:07.558323   28131 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0130 20:00:07.558331   28131 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 20:00:07.558337   28131 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 20:00:07.558342   28131 command_runner.go:130] > # minimum_mappable_gid = -1
	I0130 20:00:07.558348   28131 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0130 20:00:07.558356   28131 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0130 20:00:07.558362   28131 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0130 20:00:07.558366   28131 command_runner.go:130] > # ctr_stop_timeout = 30
	I0130 20:00:07.558372   28131 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0130 20:00:07.558380   28131 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0130 20:00:07.558387   28131 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0130 20:00:07.558393   28131 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0130 20:00:07.558398   28131 command_runner.go:130] > drop_infra_ctr = false
	I0130 20:00:07.558407   28131 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0130 20:00:07.558412   28131 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0130 20:00:07.558421   28131 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0130 20:00:07.558426   28131 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0130 20:00:07.558434   28131 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0130 20:00:07.558439   28131 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0130 20:00:07.558445   28131 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0130 20:00:07.558452   28131 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0130 20:00:07.558456   28131 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0130 20:00:07.558463   28131 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0130 20:00:07.558473   28131 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0130 20:00:07.558480   28131 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0130 20:00:07.558486   28131 command_runner.go:130] > # default_runtime = "runc"
	I0130 20:00:07.558491   28131 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0130 20:00:07.558498   28131 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0130 20:00:07.558510   28131 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0130 20:00:07.558518   28131 command_runner.go:130] > # creation as a file is not desired either.
	I0130 20:00:07.558526   28131 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0130 20:00:07.558534   28131 command_runner.go:130] > # the hostname is being managed dynamically.
	I0130 20:00:07.558539   28131 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0130 20:00:07.558542   28131 command_runner.go:130] > # ]
	I0130 20:00:07.558549   28131 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0130 20:00:07.558557   28131 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0130 20:00:07.558563   28131 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0130 20:00:07.558571   28131 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0130 20:00:07.558575   28131 command_runner.go:130] > #
	I0130 20:00:07.558580   28131 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0130 20:00:07.558585   28131 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0130 20:00:07.558589   28131 command_runner.go:130] > #  runtime_type = "oci"
	I0130 20:00:07.558594   28131 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0130 20:00:07.558601   28131 command_runner.go:130] > #  privileged_without_host_devices = false
	I0130 20:00:07.558606   28131 command_runner.go:130] > #  allowed_annotations = []
	I0130 20:00:07.558612   28131 command_runner.go:130] > # Where:
	I0130 20:00:07.558619   28131 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0130 20:00:07.558627   28131 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0130 20:00:07.558634   28131 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0130 20:00:07.558642   28131 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0130 20:00:07.558646   28131 command_runner.go:130] > #   in $PATH.
	I0130 20:00:07.558652   28131 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0130 20:00:07.558660   28131 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0130 20:00:07.558668   28131 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0130 20:00:07.558672   28131 command_runner.go:130] > #   state.
	I0130 20:00:07.558681   28131 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0130 20:00:07.558687   28131 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0130 20:00:07.558693   28131 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0130 20:00:07.558700   28131 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0130 20:00:07.558706   28131 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0130 20:00:07.558717   28131 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0130 20:00:07.558722   28131 command_runner.go:130] > #   The currently recognized values are:
	I0130 20:00:07.558728   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0130 20:00:07.558738   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0130 20:00:07.558747   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0130 20:00:07.558755   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0130 20:00:07.558763   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0130 20:00:07.558771   28131 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0130 20:00:07.558777   28131 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0130 20:00:07.558786   28131 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0130 20:00:07.558792   28131 command_runner.go:130] > #   should be moved to the container's cgroup
	I0130 20:00:07.558798   28131 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0130 20:00:07.558803   28131 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0130 20:00:07.558807   28131 command_runner.go:130] > runtime_type = "oci"
	I0130 20:00:07.558812   28131 command_runner.go:130] > runtime_root = "/run/runc"
	I0130 20:00:07.558816   28131 command_runner.go:130] > runtime_config_path = ""
	I0130 20:00:07.558821   28131 command_runner.go:130] > monitor_path = ""
	I0130 20:00:07.558826   28131 command_runner.go:130] > monitor_cgroup = ""
	I0130 20:00:07.558830   28131 command_runner.go:130] > monitor_exec_cgroup = ""
	I0130 20:00:07.558836   28131 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0130 20:00:07.558842   28131 command_runner.go:130] > # running containers
	I0130 20:00:07.558850   28131 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0130 20:00:07.558862   28131 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0130 20:00:07.558908   28131 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0130 20:00:07.558916   28131 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0130 20:00:07.558922   28131 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0130 20:00:07.558926   28131 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0130 20:00:07.558931   28131 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0130 20:00:07.558936   28131 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0130 20:00:07.558942   28131 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0130 20:00:07.558946   28131 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0130 20:00:07.558953   28131 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0130 20:00:07.558960   28131 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0130 20:00:07.558966   28131 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0130 20:00:07.558975   28131 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0130 20:00:07.558982   28131 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0130 20:00:07.558992   28131 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0130 20:00:07.559004   28131 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0130 20:00:07.559014   28131 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0130 20:00:07.559019   28131 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0130 20:00:07.559030   28131 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0130 20:00:07.559034   28131 command_runner.go:130] > # Example:
	I0130 20:00:07.559043   28131 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0130 20:00:07.559055   28131 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0130 20:00:07.559064   28131 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0130 20:00:07.559075   28131 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0130 20:00:07.559084   28131 command_runner.go:130] > # cpuset = 0
	I0130 20:00:07.559091   28131 command_runner.go:130] > # cpushares = "0-1"
	I0130 20:00:07.559099   28131 command_runner.go:130] > # Where:
	I0130 20:00:07.559107   28131 command_runner.go:130] > # The workload name is workload-type.
	I0130 20:00:07.559120   28131 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0130 20:00:07.559128   28131 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0130 20:00:07.559141   28131 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0130 20:00:07.559155   28131 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0130 20:00:07.559168   28131 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0130 20:00:07.559175   28131 command_runner.go:130] > # 
	I0130 20:00:07.559182   28131 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0130 20:00:07.559187   28131 command_runner.go:130] > #
	I0130 20:00:07.559196   28131 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0130 20:00:07.559205   28131 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0130 20:00:07.559211   28131 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0130 20:00:07.559220   28131 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0130 20:00:07.559226   28131 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0130 20:00:07.559231   28131 command_runner.go:130] > [crio.image]
	I0130 20:00:07.559237   28131 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0130 20:00:07.559244   28131 command_runner.go:130] > # default_transport = "docker://"
	I0130 20:00:07.559250   28131 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0130 20:00:07.559259   28131 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0130 20:00:07.559279   28131 command_runner.go:130] > # global_auth_file = ""
	I0130 20:00:07.559291   28131 command_runner.go:130] > # The image used to instantiate infra containers.
	I0130 20:00:07.559301   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:00:07.559311   28131 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0130 20:00:07.559323   28131 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0130 20:00:07.559330   28131 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0130 20:00:07.559335   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:00:07.559340   28131 command_runner.go:130] > # pause_image_auth_file = ""
	I0130 20:00:07.559347   28131 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0130 20:00:07.559356   28131 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0130 20:00:07.559362   28131 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0130 20:00:07.559367   28131 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0130 20:00:07.559372   28131 command_runner.go:130] > # pause_command = "/pause"
	I0130 20:00:07.559378   28131 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0130 20:00:07.559383   28131 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0130 20:00:07.559389   28131 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0130 20:00:07.559395   28131 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0130 20:00:07.559400   28131 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0130 20:00:07.559404   28131 command_runner.go:130] > # signature_policy = ""
	I0130 20:00:07.559410   28131 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0130 20:00:07.559416   28131 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0130 20:00:07.559419   28131 command_runner.go:130] > # changing them here.
	I0130 20:00:07.559423   28131 command_runner.go:130] > # insecure_registries = [
	I0130 20:00:07.559426   28131 command_runner.go:130] > # ]
	I0130 20:00:07.559434   28131 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0130 20:00:07.559439   28131 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0130 20:00:07.559445   28131 command_runner.go:130] > # image_volumes = "mkdir"
	I0130 20:00:07.559450   28131 command_runner.go:130] > # Temporary directory to use for storing big files
	I0130 20:00:07.559454   28131 command_runner.go:130] > # big_files_temporary_dir = ""
	I0130 20:00:07.559460   28131 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0130 20:00:07.559464   28131 command_runner.go:130] > # CNI plugins.
	I0130 20:00:07.559467   28131 command_runner.go:130] > [crio.network]
	I0130 20:00:07.559473   28131 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0130 20:00:07.559478   28131 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0130 20:00:07.559482   28131 command_runner.go:130] > # cni_default_network = ""
	I0130 20:00:07.559487   28131 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0130 20:00:07.559492   28131 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0130 20:00:07.559497   28131 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0130 20:00:07.559501   28131 command_runner.go:130] > # plugin_dirs = [
	I0130 20:00:07.559505   28131 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0130 20:00:07.559508   28131 command_runner.go:130] > # ]
	I0130 20:00:07.559513   28131 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0130 20:00:07.559519   28131 command_runner.go:130] > [crio.metrics]
	I0130 20:00:07.559524   28131 command_runner.go:130] > # Globally enable or disable metrics support.
	I0130 20:00:07.559529   28131 command_runner.go:130] > enable_metrics = true
	I0130 20:00:07.559534   28131 command_runner.go:130] > # Specify enabled metrics collectors.
	I0130 20:00:07.559538   28131 command_runner.go:130] > # Per default all metrics are enabled.
	I0130 20:00:07.559544   28131 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0130 20:00:07.559550   28131 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0130 20:00:07.559559   28131 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0130 20:00:07.559562   28131 command_runner.go:130] > # metrics_collectors = [
	I0130 20:00:07.559566   28131 command_runner.go:130] > # 	"operations",
	I0130 20:00:07.559570   28131 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0130 20:00:07.559575   28131 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0130 20:00:07.559581   28131 command_runner.go:130] > # 	"operations_errors",
	I0130 20:00:07.559585   28131 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0130 20:00:07.559589   28131 command_runner.go:130] > # 	"image_pulls_by_name",
	I0130 20:00:07.559596   28131 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0130 20:00:07.559600   28131 command_runner.go:130] > # 	"image_pulls_failures",
	I0130 20:00:07.559605   28131 command_runner.go:130] > # 	"image_pulls_successes",
	I0130 20:00:07.559609   28131 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0130 20:00:07.559616   28131 command_runner.go:130] > # 	"image_layer_reuse",
	I0130 20:00:07.559622   28131 command_runner.go:130] > # 	"containers_oom_total",
	I0130 20:00:07.559629   28131 command_runner.go:130] > # 	"containers_oom",
	I0130 20:00:07.559633   28131 command_runner.go:130] > # 	"processes_defunct",
	I0130 20:00:07.559639   28131 command_runner.go:130] > # 	"operations_total",
	I0130 20:00:07.559644   28131 command_runner.go:130] > # 	"operations_latency_seconds",
	I0130 20:00:07.559651   28131 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0130 20:00:07.559655   28131 command_runner.go:130] > # 	"operations_errors_total",
	I0130 20:00:07.559662   28131 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0130 20:00:07.559666   28131 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0130 20:00:07.559672   28131 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0130 20:00:07.559676   28131 command_runner.go:130] > # 	"image_pulls_success_total",
	I0130 20:00:07.559680   28131 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0130 20:00:07.559687   28131 command_runner.go:130] > # 	"containers_oom_count_total",
	I0130 20:00:07.559690   28131 command_runner.go:130] > # ]
	I0130 20:00:07.559696   28131 command_runner.go:130] > # The port on which the metrics server will listen.
	I0130 20:00:07.559700   28131 command_runner.go:130] > # metrics_port = 9090
	I0130 20:00:07.559705   28131 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0130 20:00:07.559712   28131 command_runner.go:130] > # metrics_socket = ""
	I0130 20:00:07.559719   28131 command_runner.go:130] > # The certificate for the secure metrics server.
	I0130 20:00:07.559727   28131 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0130 20:00:07.559733   28131 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0130 20:00:07.559740   28131 command_runner.go:130] > # certificate on any modification event.
	I0130 20:00:07.559744   28131 command_runner.go:130] > # metrics_cert = ""
	I0130 20:00:07.559752   28131 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0130 20:00:07.559757   28131 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0130 20:00:07.559769   28131 command_runner.go:130] > # metrics_key = ""
	I0130 20:00:07.559777   28131 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0130 20:00:07.559781   28131 command_runner.go:130] > [crio.tracing]
	I0130 20:00:07.559786   28131 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0130 20:00:07.559793   28131 command_runner.go:130] > # enable_tracing = false
	I0130 20:00:07.559798   28131 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0130 20:00:07.559804   28131 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0130 20:00:07.559809   28131 command_runner.go:130] > # Number of samples to collect per million spans.
	I0130 20:00:07.559816   28131 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0130 20:00:07.559822   28131 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0130 20:00:07.559830   28131 command_runner.go:130] > [crio.stats]
	I0130 20:00:07.559838   28131 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0130 20:00:07.559850   28131 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0130 20:00:07.559856   28131 command_runner.go:130] > # stats_collection_period = 0
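The crio config dump above captures the runtime settings minikube provisioned into the guest: cgroup_manager = "cgroupfs", a single runc handler, pids_limit = 1024 and pause_image = registry.k8s.io/pause:3.9. As a minimal sketch for checking those same fields by hand (assuming the multinode-572652 profile is still up and reachable), the config can be re-dumped through minikube ssh and filtered locally:

    # sketch only: re-run the same command the provisioner ran and pick out the fields kubeadm cares about
    minikube ssh -p multinode-572652 -- sudo crio config 2>/dev/null \
      | grep -E '^(cgroup_manager|conmon_cgroup|pids_limit|pause_image)'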
	I0130 20:00:07.559926   28131 cni.go:84] Creating CNI manager for ""
	I0130 20:00:07.559936   28131 cni.go:136] 3 nodes found, recommending kindnet
	I0130 20:00:07.559951   28131 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:00:07.559967   28131 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-572652 NodeName:multinode-572652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:00:07.560137   28131 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-572652"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:00:07.560230   28131 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-572652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-572652 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
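The rendered kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration above) and the kubelet ExecStart flags are written into the guest by the scp steps that follow. As an illustrative spot-check, assuming the node is still running, both can be read back via minikube ssh; the paths are taken from the scp lines below:

    # sketch: inspect the kubeadm config staged by minikube and the kubelet systemd drop-in
    minikube ssh -p multinode-572652 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube ssh -p multinode-572652 -- sudo systemctl cat kubelet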
	I0130 20:00:07.560277   28131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:00:07.570423   28131 command_runner.go:130] > kubeadm
	I0130 20:00:07.570439   28131 command_runner.go:130] > kubectl
	I0130 20:00:07.570445   28131 command_runner.go:130] > kubelet
	I0130 20:00:07.570460   28131 binaries.go:44] Found k8s binaries, skipping transfer
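binaries.go only re-transfers the Kubernetes binaries when the expected versions are missing; here kubeadm, kubectl and kubelet were already present under /var/lib/minikube/binaries/v1.28.4, so the transfer is skipped. A hedged one-liner to confirm the staged kubelet matches the requested v1.28.4, assuming the guest is reachable:

    # sketch: report the version of the staged kubelet binary
    minikube ssh -p multinode-572652 -- /var/lib/minikube/binaries/v1.28.4/kubelet --version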
	I0130 20:00:07.570516   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:00:07.580560   28131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0130 20:00:07.598995   28131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:00:07.616061   28131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0130 20:00:07.632704   28131 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0130 20:00:07.636150   28131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
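The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal points at the node IP. A quick verification, assuming the profile is still running:

    # expected output: 192.168.39.186   control-plane.minikube.internal
    minikube ssh -p multinode-572652 -- grep control-plane.minikube.internal /etc/hosts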
	I0130 20:00:07.648109   28131 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652 for IP: 192.168.39.186
	I0130 20:00:07.648136   28131 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:00:07.648297   28131 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:00:07.648339   28131 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:00:07.648401   28131 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.key
	I0130 20:00:07.648451   28131 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/apiserver.key.d0691019
	I0130 20:00:07.648502   28131 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/proxy-client.key
	I0130 20:00:07.648518   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0130 20:00:07.648533   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0130 20:00:07.648550   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0130 20:00:07.648563   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0130 20:00:07.648577   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0130 20:00:07.648591   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0130 20:00:07.648602   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0130 20:00:07.648614   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0130 20:00:07.648666   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:00:07.648702   28131 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:00:07.648713   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:00:07.648738   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:00:07.648782   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:00:07.648808   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:00:07.648849   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:00:07.648874   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:00:07.648889   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem -> /usr/share/ca-certificates/11667.pem
	I0130 20:00:07.648901   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> /usr/share/ca-certificates/116672.pem
	I0130 20:00:07.649391   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:00:07.672976   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 20:00:07.695960   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:00:07.718847   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:00:07.740604   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:00:07.762749   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:00:07.785544   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:00:07.808098   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:00:07.830420   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:00:07.852829   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:00:07.874055   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:00:07.895737   28131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:00:07.911145   28131 ssh_runner.go:195] Run: openssl version
	I0130 20:00:07.916612   28131 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0130 20:00:07.916664   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:00:07.926233   28131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:00:07.930774   28131 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:00:07.930797   28131 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:00:07.930836   28131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:00:07.935853   28131 command_runner.go:130] > 51391683
	I0130 20:00:07.936040   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:00:07.945503   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:00:07.956653   28131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:00:07.961139   28131 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:00:07.961160   28131 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:00:07.961205   28131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:00:07.966515   28131 command_runner.go:130] > 3ec20f2e
	I0130 20:00:07.967149   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:00:07.977506   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:00:07.988590   28131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:00:07.993070   28131 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:00:07.993251   28131 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:00:07.993314   28131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:00:07.998731   28131 command_runner.go:130] > b5213941
	I0130 20:00:07.998789   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
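	The three rounds above follow the usual OpenSSL CA-directory convention: compute each certificate's subject hash (51391683, 3ec20f2e, b5213941) and symlink the PEM into /etc/ssl/certs as "<hash>.0" so TLS clients that scan that directory can find it. A minimal sketch of the same two steps, shelling out to openssl exactly as the log does; the helper name is hypothetical and this is not minikube's own code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCAHash hashes a PEM certificate with openssl and links it as <hash>.0
	// under certsDir, mirroring the "openssl x509 -hash" + "ln -fs" pair above.
	func linkCAHash(pemPath, certsDir string) error {
		// Equivalent of: openssl x509 -hash -noout -in <pemPath>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))

		// Equivalent of: ln -fs <pemPath> <certsDir>/<hash>.0 (needs root for /etc/ssl/certs)
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // -f: replace an existing link if present
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCAHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}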
	I0130 20:00:08.008674   28131 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:00:08.013410   28131 command_runner.go:130] > ca.crt
	I0130 20:00:08.013427   28131 command_runner.go:130] > ca.key
	I0130 20:00:08.013435   28131 command_runner.go:130] > healthcheck-client.crt
	I0130 20:00:08.013442   28131 command_runner.go:130] > healthcheck-client.key
	I0130 20:00:08.013449   28131 command_runner.go:130] > peer.crt
	I0130 20:00:08.013456   28131 command_runner.go:130] > peer.key
	I0130 20:00:08.013461   28131 command_runner.go:130] > server.crt
	I0130 20:00:08.013468   28131 command_runner.go:130] > server.key
	I0130 20:00:08.013521   28131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:00:08.019107   28131 command_runner.go:130] > Certificate will not expire
	I0130 20:00:08.019159   28131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:00:08.024621   28131 command_runner.go:130] > Certificate will not expire
	I0130 20:00:08.024902   28131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:00:08.031365   28131 command_runner.go:130] > Certificate will not expire
	I0130 20:00:08.031615   28131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:00:08.037282   28131 command_runner.go:130] > Certificate will not expire
	I0130 20:00:08.037327   28131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:00:08.042597   28131 command_runner.go:130] > Certificate will not expire
	I0130 20:00:08.042924   28131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:00:08.048687   28131 command_runner.go:130] > Certificate will not expire
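	Each "openssl x509 -checkend 86400" call above asks whether a control-plane certificate expires within the next 24 hours; "Certificate will not expire" means the existing certs can be reused rather than regenerated. A rough Go equivalent of that check, stdlib only, with a hypothetical helper name:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// i.e. the same question as `openssl x509 -noout -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire") // checkend exits non-zero in this case
		} else {
			fmt.Println("Certificate will not expire")
		}
	}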
	I0130 20:00:08.048902   28131 kubeadm.go:404] StartCluster: {Name:multinode-572652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-572652 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:00:08.049011   28131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:00:08.049062   28131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:00:08.095827   28131 cri.go:89] found id: ""
	I0130 20:00:08.095900   28131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:00:08.107708   28131 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0130 20:00:08.107735   28131 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0130 20:00:08.107744   28131 command_runner.go:130] > /var/lib/minikube/etcd:
	I0130 20:00:08.107750   28131 command_runner.go:130] > member
	I0130 20:00:08.107772   28131 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:00:08.107786   28131 kubeadm.go:636] restartCluster start
	I0130 20:00:08.107837   28131 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:00:08.117048   28131 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:08.117618   28131 kubeconfig.go:92] found "multinode-572652" server: "https://192.168.39.186:8443"
	I0130 20:00:08.118012   28131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:00:08.118287   28131 kapi.go:59] client config for multinode-572652: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.crt", KeyFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.key", CAFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 20:00:08.118798   28131 cert_rotation.go:137] Starting client certificate rotation controller
	I0130 20:00:08.118953   28131 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:00:08.128380   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:08.128438   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:08.140498   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:08.629098   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:08.629188   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:08.641493   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:09.129139   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:09.129207   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:09.141659   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:09.629304   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:09.629383   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:09.641632   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:10.129237   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:10.129304   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:10.141320   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:10.628886   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:10.628961   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:10.641002   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:11.128632   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:11.128707   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:11.140636   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:11.628692   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:11.628755   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:11.640816   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:12.129453   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:12.129529   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:12.141272   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:12.628819   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:12.628885   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:12.640904   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:13.128505   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:13.128621   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:13.141112   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:13.628637   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:13.628708   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:13.641054   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:14.128627   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:14.128732   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:14.141109   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:14.628660   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:14.628733   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:14.640900   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:15.128444   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:15.128538   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:15.140490   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:15.629122   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:15.629212   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:15.641159   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:16.128731   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:16.128816   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:16.140954   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:16.628624   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:16.628696   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:16.640594   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:17.129241   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:17.129351   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:17.141260   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:17.628806   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:17.628885   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:17.641095   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:18.128926   28131 api_server.go:166] Checking apiserver status ...
	I0130 20:00:18.129031   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:00:18.141058   28131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:00:18.141082   28131 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
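	The block above is a fixed-interval retry: roughly every 500ms the runner probes for a kube-apiserver process with pgrep, and once the surrounding deadline lapses it concludes the apiserver is not running and falls back to reconfiguring the cluster. A minimal sketch of that poll-until-deadline pattern (names and intervals are illustrative, not minikube's own implementation):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// pollUntil runs probe every interval until it succeeds or ctx expires.
	func pollUntil(ctx context.Context, interval time.Duration, probe func() error) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if err := probe(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver error: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		err := pollUntil(ctx, 500*time.Millisecond, func() error {
			// roughly: sudo pgrep -xnf kube-apiserver.*minikube.*
			return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		})
		if err != nil {
			fmt.Println("needs reconfigure:", err) // e.g. "context deadline exceeded"
		}
	}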
	I0130 20:00:18.141091   28131 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:00:18.141106   28131 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:00:18.141163   28131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:00:18.176701   28131 cri.go:89] found id: ""
	I0130 20:00:18.176765   28131 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:00:18.192298   28131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:00:18.201217   28131 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0130 20:00:18.201235   28131 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0130 20:00:18.201248   28131 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0130 20:00:18.201256   28131 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:00:18.201440   28131 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:00:18.201507   28131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:00:18.210241   28131 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:00:18.210259   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:00:18.310115   28131 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:00:18.310593   28131 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0130 20:00:18.311165   28131 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0130 20:00:18.311698   28131 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 20:00:18.312383   28131 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0130 20:00:18.312837   28131 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0130 20:00:18.313657   28131 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0130 20:00:18.314171   28131 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0130 20:00:18.314606   28131 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0130 20:00:18.315175   28131 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 20:00:18.315650   28131 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 20:00:18.316333   28131 command_runner.go:130] > [certs] Using the existing "sa" key
	I0130 20:00:18.317762   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:00:19.019843   28131 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:00:19.019865   28131 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:00:19.019871   28131 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:00:19.019876   28131 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:00:19.019883   28131 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:00:19.020126   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:00:19.221648   28131 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:00:19.221681   28131 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:00:19.221690   28131 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0130 20:00:19.221718   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:00:19.292222   28131 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:00:19.292247   28131 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:00:19.292256   28131 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:00:19.292268   28131 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:00:19.292352   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:00:19.359421   28131 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:00:19.359457   28131 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:00:19.359516   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:00:19.860291   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:00:20.359936   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:00:20.860645   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:00:21.360007   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:00:21.860262   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:00:21.885117   28131 command_runner.go:130] > 1095
	I0130 20:00:21.885158   28131 api_server.go:72] duration metric: took 2.525698919s to wait for apiserver process to appear ...
	I0130 20:00:21.885171   28131 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:00:21.885192   28131 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0130 20:00:25.747822   28131 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:00:25.747857   28131 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:00:25.747889   28131 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0130 20:00:25.807154   28131 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:00:25.807185   28131 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:00:25.885284   28131 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0130 20:00:25.891391   28131 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:00:25.891422   28131 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:00:26.386023   28131 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0130 20:00:26.390941   28131 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:00:26.390968   28131 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:00:26.885516   28131 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0130 20:00:26.895011   28131 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:00:26.895037   28131 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:00:27.385614   28131 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0130 20:00:27.392705   28131 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0130 20:00:27.392779   28131 round_trippers.go:463] GET https://192.168.39.186:8443/version
	I0130 20:00:27.392785   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:27.392793   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:27.392803   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:27.400410   28131 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0130 20:00:27.400435   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:27.400461   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:27.400471   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:27.400483   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:27.400491   28131 round_trippers.go:580]     Content-Length: 264
	I0130 20:00:27.400530   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:27 GMT
	I0130 20:00:27.400543   28131 round_trippers.go:580]     Audit-Id: 01b8e4fe-c3c0-4c46-9003-cef159f989a2
	I0130 20:00:27.400551   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:27.400753   28131 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0130 20:00:27.400839   28131 api_server.go:141] control plane version: v1.28.4
	I0130 20:00:27.400862   28131 api_server.go:131] duration metric: took 5.515681743s to wait for apiserver health ...
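	The healthz wait above shows the typical progression of a restarting apiserver: first 403 (the anonymous probe is rejected until RBAC bootstrap roles exist), then 500 while individual post-start hooks such as rbac/bootstrap-roles are still failing, and finally a plain 200 "ok". A minimal sketch of such a probe, assuming a skip-verify TLS client for brevity (a real client would trust the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz fetches /healthz once and reports whether it returned 200.
	// Non-200 bodies (the 403 JSON or the [+]/[-] hook listing) are printed as-is.
	func checkHealthz(url string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		for {
			ok, err := checkHealthz("https://192.168.39.186:8443/healthz")
			if err == nil && ok {
				return // "ok"
			}
			time.Sleep(500 * time.Millisecond)
		}
	}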
	I0130 20:00:27.400870   28131 cni.go:84] Creating CNI manager for ""
	I0130 20:00:27.400878   28131 cni.go:136] 3 nodes found, recommending kindnet
	I0130 20:00:27.402332   28131 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0130 20:00:27.403662   28131 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0130 20:00:27.418267   28131 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0130 20:00:27.418289   28131 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0130 20:00:27.418295   28131 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0130 20:00:27.418302   28131 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 20:00:27.418308   28131 command_runner.go:130] > Access: 2024-01-30 19:59:52.571662116 +0000
	I0130 20:00:27.418313   28131 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0130 20:00:27.418318   28131 command_runner.go:130] > Change: 2024-01-30 19:59:50.660662116 +0000
	I0130 20:00:27.418322   28131 command_runner.go:130] >  Birth: -
	I0130 20:00:27.418903   28131 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0130 20:00:27.418922   28131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0130 20:00:27.456759   28131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0130 20:00:28.682979   28131 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0130 20:00:28.683004   28131 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0130 20:00:28.683014   28131 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0130 20:00:28.683032   28131 command_runner.go:130] > daemonset.apps/kindnet configured
	I0130 20:00:28.683326   28131 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.226533099s)
	I0130 20:00:28.683351   28131 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:00:28.683443   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods
	I0130 20:00:28.683455   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:28.683464   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:28.683474   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:28.686985   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:28.687009   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:28.687019   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:28 GMT
	I0130 20:00:28.687025   28131 round_trippers.go:580]     Audit-Id: a182d088-76ee-4f81-bf6c-468e4b828ad2
	I0130 20:00:28.687030   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:28.687035   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:28.687040   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:28.687050   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:28.689126   28131 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"778"},"items":[{"metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"765","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82638 chars]
	I0130 20:00:28.693349   28131 system_pods.go:59] 12 kube-system pods found
	I0130 20:00:28.693375   28131 system_pods.go:61] "coredns-5dd5756b68-579fc" [8ed4a94c-417c-480d-9f9a-4101a5103066] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:00:28.693381   28131 system_pods.go:61] "etcd-multinode-572652" [e44ed93f-1c85-4d27-bacb-f454d6eaa0b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:00:28.693388   28131 system_pods.go:61] "kindnet-rzx54" [87aab713-13c1-4fd2-bc90-73b2998226dc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0130 20:00:28.693393   28131 system_pods.go:61] "kindnet-srbck" [dd92c807-033f-496a-bff0-004577831a5c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0130 20:00:28.693398   28131 system_pods.go:61] "kindnet-w5jvc" [b629bb0f-d26e-4db0-9776-0e5e400dc7d7] Running
	I0130 20:00:28.693404   28131 system_pods.go:61] "kube-apiserver-multinode-572652" [fc451607-277c-45fe-a0f9-a3502db0251b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:00:28.693412   28131 system_pods.go:61] "kube-controller-manager-multinode-572652" [ce85a6a9-3600-41a9-824a-d01c009aead2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:00:28.693421   28131 system_pods.go:61] "kube-proxy-hx9f7" [95d8777b-0e61-4662-a7a6-1fb5e7b4ae29] Running
	I0130 20:00:28.693425   28131 system_pods.go:61] "kube-proxy-j5sr4" [d6bacfbc-c1e8-4dd2-bd48-778725887a72] Running
	I0130 20:00:28.693431   28131 system_pods.go:61] "kube-proxy-rbwvp" [2cd3c663-bf55-49b2-9120-101ac59912fd] Running
	I0130 20:00:28.693436   28131 system_pods.go:61] "kube-scheduler-multinode-572652" [ee4d8608-40cb-4281-ac1f-bc5ac41ff27d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:00:28.693443   28131 system_pods.go:61] "storage-provisioner" [a1eb366d-4b7c-4900-9e2e-83ebcee3d015] Running
	I0130 20:00:28.693462   28131 system_pods.go:74] duration metric: took 10.105111ms to wait for pod list to return data ...
	I0130 20:00:28.693468   28131 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:00:28.693518   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes
	I0130 20:00:28.693525   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:28.693532   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:28.693538   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:28.696237   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:28.696256   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:28.696266   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:28.696275   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:28 GMT
	I0130 20:00:28.696282   28131 round_trippers.go:580]     Audit-Id: e1923c54-ac8b-471a-84db-c78b651e8a3e
	I0130 20:00:28.696289   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:28.696296   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:28.696308   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:28.696616   28131 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"778"},"items":[{"metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"732","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16475 chars]
	I0130 20:00:28.697447   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:00:28.697473   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:00:28.697483   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:00:28.697487   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:00:28.697491   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:00:28.697495   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:00:28.697498   28131 node_conditions.go:105] duration metric: took 4.026731ms to run NodePressure ...
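	The NodePressure step above reads each node's capacity from GET /api/v1/nodes; all three nodes report 2 CPUs and 17784752Ki of ephemeral storage. A small sketch that decodes the same fields with the standard library, assuming (purely for illustration) that the API is reachable unauthenticated through kubectl proxy on 127.0.0.1:8001:

	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	// nodeList models just the fields we need from the /api/v1/nodes response.
	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Capacity map[string]string `json:"capacity"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		resp, err := http.Get("http://127.0.0.1:8001/api/v1/nodes")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		var nl nodeList
		if err := json.NewDecoder(resp.Body).Decode(&nl); err != nil {
			panic(err)
		}
		for _, n := range nl.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
				n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
		}
	}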
	I0130 20:00:28.697512   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:00:28.922433   28131 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0130 20:00:28.922466   28131 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0130 20:00:28.922497   28131 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:00:28.922602   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0130 20:00:28.922617   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:28.922627   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:28.922639   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:28.930632   28131 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0130 20:00:28.930653   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:28.930660   28131 round_trippers.go:580]     Audit-Id: dc63acd9-c677-4227-a7e9-7eec5470bd7a
	I0130 20:00:28.930668   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:28.930676   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:28.930684   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:28.930693   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:28.930699   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:28 GMT
	I0130 20:00:28.931328   28131 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"780"},"items":[{"metadata":{"name":"etcd-multinode-572652","namespace":"kube-system","uid":"e44ed93f-1c85-4d27-bacb-f454d6eaa0b6","resourceVersion":"767","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.186:2379","kubernetes.io/config.hash":"3d195cc1c68274636debff677374c054","kubernetes.io/config.mirror":"3d195cc1c68274636debff677374c054","kubernetes.io/config.seen":"2024-01-30T19:50:00.428284843Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0130 20:00:28.932680   28131 kubeadm.go:787] kubelet initialised
	I0130 20:00:28.932700   28131 kubeadm.go:788] duration metric: took 10.191735ms waiting for restarted kubelet to initialise ...
	I0130 20:00:28.932707   28131 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:00:28.932767   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods
	I0130 20:00:28.932777   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:28.932785   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:28.932792   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:28.935978   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:28.936002   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:28.936011   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:28.936027   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:28.936035   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:28.936041   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:28.936049   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:28 GMT
	I0130 20:00:28.936054   28131 round_trippers.go:580]     Audit-Id: 4f1b0b97-1433-496a-a6d5-ac6756109cf0
	I0130 20:00:28.937311   28131 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"780"},"items":[{"metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"765","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82638 chars]
	I0130 20:00:28.939806   28131 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-579fc" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:28.939892   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-579fc
	I0130 20:00:28.939903   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:28.939913   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:28.939923   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:28.942039   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:28.942060   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:28.942066   28131 round_trippers.go:580]     Audit-Id: 5f2fffe5-4793-43f0-a840-76c639a25dfd
	I0130 20:00:28.942072   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:28.942076   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:28.942082   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:28.942087   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:28.942092   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:28 GMT
	I0130 20:00:28.942382   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"765","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 20:00:28.942751   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:28.942762   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:28.942769   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:28.942775   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:28.945061   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:28.945075   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:28.945084   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:28.945093   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:28 GMT
	I0130 20:00:28.945102   28131 round_trippers.go:580]     Audit-Id: d71b6258-b4e8-4a35-bd8f-d92b9ddd1120
	I0130 20:00:28.945110   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:28.945118   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:28.945123   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:28.945448   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"732","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 20:00:28.945794   28131 pod_ready.go:97] node "multinode-572652" hosting pod "coredns-5dd5756b68-579fc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
	I0130 20:00:28.945814   28131 pod_ready.go:81] duration metric: took 5.989872ms waiting for pod "coredns-5dd5756b68-579fc" in "kube-system" namespace to be "Ready" ...
	E0130 20:00:28.945825   28131 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-572652" hosting pod "coredns-5dd5756b68-579fc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
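The pod_ready.go lines above capture the per-pod readiness check applied during this restart: fetch the pod, then fetch the node hosting it, and skip the wait whenever that node reports Ready=False. The following client-go sketch illustrates that idea only; it is not the pod_ready.go implementation, and the kubeconfig path, namespace, and pod name are taken from this log purely as placeholders.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as recorded later in this log; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18007-4458/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-579fc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node) {
		// Mirrors the "(skipping!)" branch in the log: don't wait on pods whose node is NotReady.
		fmt.Printf("node %s not Ready; skipping wait for %s\n", node.Name, pod.Name)
		return
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}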
	I0130 20:00:28.945832   28131 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:28.945876   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-572652
	I0130 20:00:28.945884   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:28.945891   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:28.945897   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:28.947994   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:28.948007   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:28.948017   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:28.948023   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:28 GMT
	I0130 20:00:28.948030   28131 round_trippers.go:580]     Audit-Id: f925beb7-5465-4a1e-8e2f-c477adfb5824
	I0130 20:00:28.948035   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:28.948040   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:28.948045   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:28.948179   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-572652","namespace":"kube-system","uid":"e44ed93f-1c85-4d27-bacb-f454d6eaa0b6","resourceVersion":"767","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.186:2379","kubernetes.io/config.hash":"3d195cc1c68274636debff677374c054","kubernetes.io/config.mirror":"3d195cc1c68274636debff677374c054","kubernetes.io/config.seen":"2024-01-30T19:50:00.428284843Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0130 20:00:28.948590   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:28.948606   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:28.948622   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:28.948633   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:28.950414   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:28.950428   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:28.950433   28131 round_trippers.go:580]     Audit-Id: f2fe15fe-5bcb-4158-be05-b4aef75cc394
	I0130 20:00:28.950439   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:28.950444   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:28.950448   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:28.950456   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:28.950462   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:28 GMT
	I0130 20:00:28.950563   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"732","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 20:00:28.950842   28131 pod_ready.go:97] node "multinode-572652" hosting pod "etcd-multinode-572652" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
	I0130 20:00:28.950857   28131 pod_ready.go:81] duration metric: took 5.019319ms waiting for pod "etcd-multinode-572652" in "kube-system" namespace to be "Ready" ...
	E0130 20:00:28.950865   28131 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-572652" hosting pod "etcd-multinode-572652" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
	I0130 20:00:28.950877   28131 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:28.950912   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-572652
	I0130 20:00:28.950919   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:28.950925   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:28.950933   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:28.952858   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:28.952870   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:28.952876   28131 round_trippers.go:580]     Audit-Id: 9698753e-2235-4c17-8728-c93575cfea7c
	I0130 20:00:28.952881   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:28.952886   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:28.952891   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:28.952896   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:28.952911   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:28 GMT
	I0130 20:00:28.953118   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-572652","namespace":"kube-system","uid":"fc451607-277c-45fe-a0f9-a3502db0251b","resourceVersion":"768","creationTimestamp":"2024-01-30T19:49:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.186:8443","kubernetes.io/config.hash":"d6f18dcbbdea790709196864d2f77f8b","kubernetes.io/config.mirror":"d6f18dcbbdea790709196864d2f77f8b","kubernetes.io/config.seen":"2024-01-30T19:49:51.352745901Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:49:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0130 20:00:28.953547   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:28.953561   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:28.953572   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:28.953582   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:28.955478   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:28.955492   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:28.955498   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:28.955503   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:28.955508   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:28 GMT
	I0130 20:00:28.955513   28131 round_trippers.go:580]     Audit-Id: 3d8a7dea-fb2f-499e-933b-4e784b65f7af
	I0130 20:00:28.955517   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:28.955525   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:28.955636   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"732","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 20:00:28.956024   28131 pod_ready.go:97] node "multinode-572652" hosting pod "kube-apiserver-multinode-572652" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
	I0130 20:00:28.956047   28131 pod_ready.go:81] duration metric: took 5.163669ms waiting for pod "kube-apiserver-multinode-572652" in "kube-system" namespace to be "Ready" ...
	E0130 20:00:28.956056   28131 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-572652" hosting pod "kube-apiserver-multinode-572652" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
	I0130 20:00:28.956063   28131 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:28.956122   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-572652
	I0130 20:00:28.956133   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:28.956143   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:28.956151   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:28.958141   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:28.958162   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:28.958175   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:28 GMT
	I0130 20:00:28.958186   28131 round_trippers.go:580]     Audit-Id: def6f52d-c30a-49ae-bcdd-2ac23595965a
	I0130 20:00:28.958195   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:28.958207   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:28.958216   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:28.958229   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:28.958394   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-572652","namespace":"kube-system","uid":"ce85a6a9-3600-41a9-824a-d01c009aead2","resourceVersion":"769","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.mirror":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289181Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0130 20:00:29.084128   28131 request.go:629] Waited for 125.290619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:29.084211   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:29.084224   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:29.084235   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:29.084248   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:29.087112   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:29.087131   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:29.087138   28131 round_trippers.go:580]     Audit-Id: 60031c1f-28f5-46f0-912d-eb99ca1b12c7
	I0130 20:00:29.087143   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:29.087149   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:29.087155   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:29.087163   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:29.087168   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:29 GMT
	I0130 20:00:29.087334   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"732","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 20:00:29.087667   28131 pod_ready.go:97] node "multinode-572652" hosting pod "kube-controller-manager-multinode-572652" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
	I0130 20:00:29.087692   28131 pod_ready.go:81] duration metric: took 131.616ms waiting for pod "kube-controller-manager-multinode-572652" in "kube-system" namespace to be "Ready" ...
	E0130 20:00:29.087707   28131 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-572652" hosting pod "kube-controller-manager-multinode-572652" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
	I0130 20:00:29.087713   28131 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hx9f7" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:29.284156   28131 request.go:629] Waited for 196.366446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx9f7
	I0130 20:00:29.284232   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx9f7
	I0130 20:00:29.284244   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:29.284254   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:29.284267   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:29.287128   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:29.287153   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:29.287163   28131 round_trippers.go:580]     Audit-Id: adcc4dc0-0ceb-42dd-854f-b237df535649
	I0130 20:00:29.287171   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:29.287179   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:29.287185   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:29.287192   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:29.287203   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:29 GMT
	I0130 20:00:29.287473   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hx9f7","generateName":"kube-proxy-","namespace":"kube-system","uid":"95d8777b-0e61-4662-a7a6-1fb5e7b4ae29","resourceVersion":"773","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0130 20:00:29.483996   28131 request.go:629] Waited for 196.110254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:29.484151   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:29.484164   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:29.484172   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:29.484177   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:29.486451   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:29.486467   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:29.486477   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:29.486486   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:29.486501   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:29.486510   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:29.486519   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:29 GMT
	I0130 20:00:29.486532   28131 round_trippers.go:580]     Audit-Id: ce2ed098-efee-47f2-820f-cbdb0699c554
	I0130 20:00:29.486689   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"732","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 20:00:29.487126   28131 pod_ready.go:97] node "multinode-572652" hosting pod "kube-proxy-hx9f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
	I0130 20:00:29.487153   28131 pod_ready.go:81] duration metric: took 399.429696ms waiting for pod "kube-proxy-hx9f7" in "kube-system" namespace to be "Ready" ...
	E0130 20:00:29.487165   28131 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-572652" hosting pod "kube-proxy-hx9f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
	I0130 20:00:29.487174   28131 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j5sr4" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:29.684446   28131 request.go:629] Waited for 197.212537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5sr4
	I0130 20:00:29.684519   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5sr4
	I0130 20:00:29.684530   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:29.684543   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:29.684554   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:29.688506   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:29.688531   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:29.688541   28131 round_trippers.go:580]     Audit-Id: 04a349b7-7522-4523-aad3-a6573ab25bfa
	I0130 20:00:29.688549   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:29.688557   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:29.688566   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:29.688575   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:29.688592   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:29 GMT
	I0130 20:00:29.688745   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5sr4","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6bacfbc-c1e8-4dd2-bd48-778725887a72","resourceVersion":"699","creationTimestamp":"2024-01-30T19:51:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0130 20:00:29.884419   28131 request.go:629] Waited for 195.161489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m03
	I0130 20:00:29.884513   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m03
	I0130 20:00:29.884528   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:29.884538   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:29.884551   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:29.887085   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:29.887109   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:29.887118   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:29.887127   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:29.887135   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:29.887143   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:29.887151   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:29 GMT
	I0130 20:00:29.887162   28131 round_trippers.go:580]     Audit-Id: 3da0917d-9748-4683-b5a7-b7d7c24ec1fa
	I0130 20:00:29.887296   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652-m03","uid":"6e43dfc4-d01d-44de-b61c-e668bf1447ff","resourceVersion":"729","creationTimestamp":"2024-01-30T19:52:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T19_52_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:52:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4084 chars]
	I0130 20:00:29.887672   28131 pod_ready.go:92] pod "kube-proxy-j5sr4" in "kube-system" namespace has status "Ready":"True"
	I0130 20:00:29.887692   28131 pod_ready.go:81] duration metric: took 400.509291ms waiting for pod "kube-proxy-j5sr4" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:29.887704   28131 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rbwvp" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:30.083474   28131 request.go:629] Waited for 195.707277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rbwvp
	I0130 20:00:30.083563   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rbwvp
	I0130 20:00:30.083572   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:30.083580   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:30.083587   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:30.086488   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:30.086514   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:30.086524   28131 round_trippers.go:580]     Audit-Id: f4f71eec-08cb-4c2e-9d4a-487897479af3
	I0130 20:00:30.086533   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:30.086544   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:30.086553   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:30.086560   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:30.086567   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:30 GMT
	I0130 20:00:30.086700   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rbwvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"2cd3c663-bf55-49b2-9120-101ac59912fd","resourceVersion":"484","creationTimestamp":"2024-01-30T19:50:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0130 20:00:30.283515   28131 request.go:629] Waited for 196.281815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m02
	I0130 20:00:30.283582   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m02
	I0130 20:00:30.283588   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:30.283598   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:30.283608   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:30.286367   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:30.286392   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:30.286402   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:30.286410   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:30 GMT
	I0130 20:00:30.286421   28131 round_trippers.go:580]     Audit-Id: 64273f20-a74a-4a50-8f92-f46e84d4265f
	I0130 20:00:30.286438   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:30.286450   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:30.286459   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:30.286608   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652-m02","uid":"dff06704-3844-4766-a722-a280b6a04c06","resourceVersion":"777","creationTimestamp":"2024-01-30T19:50:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T19_52_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I0130 20:00:30.286979   28131 pod_ready.go:92] pod "kube-proxy-rbwvp" in "kube-system" namespace has status "Ready":"True"
	I0130 20:00:30.287002   28131 pod_ready.go:81] duration metric: took 399.289778ms waiting for pod "kube-proxy-rbwvp" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:30.287020   28131 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:30.483919   28131 request.go:629] Waited for 196.825871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-572652
	I0130 20:00:30.484007   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-572652
	I0130 20:00:30.484017   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:30.484024   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:30.484030   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:30.486740   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:30.486763   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:30.486773   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:30 GMT
	I0130 20:00:30.486781   28131 round_trippers.go:580]     Audit-Id: a205f050-621c-4c10-a50b-fe273fc24a90
	I0130 20:00:30.486792   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:30.486801   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:30.486811   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:30.486822   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:30.487217   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-572652","namespace":"kube-system","uid":"ee4d8608-40cb-4281-ac1f-bc5ac41ff27d","resourceVersion":"762","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"85e85fa7283981ab3a029cbc7c4cbcc1","kubernetes.io/config.mirror":"85e85fa7283981ab3a029cbc7c4cbcc1","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289879Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0130 20:00:30.683973   28131 request.go:629] Waited for 196.347564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:30.684044   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:30.684052   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:30.684060   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:30.684067   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:30.689063   28131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0130 20:00:30.689090   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:30.689101   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:30.689109   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:30.689118   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:30.689126   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:30.689134   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:30 GMT
	I0130 20:00:30.689145   28131 round_trippers.go:580]     Audit-Id: 0c6f4971-2757-47c3-b8b9-a624e102f661
	I0130 20:00:30.689454   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"732","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 20:00:30.689748   28131 pod_ready.go:97] node "multinode-572652" hosting pod "kube-scheduler-multinode-572652" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
	I0130 20:00:30.689765   28131 pod_ready.go:81] duration metric: took 402.738217ms waiting for pod "kube-scheduler-multinode-572652" in "kube-system" namespace to be "Ready" ...
	E0130 20:00:30.689774   28131 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-572652" hosting pod "kube-scheduler-multinode-572652" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-572652" has status "Ready":"False"
	I0130 20:00:30.689780   28131 pod_ready.go:38] duration metric: took 1.757065546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
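The recurring request.go:629 "Waited for ... due to client-side throttling" entries in the loop above come from client-go's client-side rate limiter: with QPS and Burst left at 0 in rest.Config (as the kapi.go client-config dump further down shows), client-go falls back to its small defaults, so bursts of GETs are spaced out. A minimal sketch of widening those limits when building a config; the package name and the 50/100 values are illustrative, not what minikube uses.

package clientcfg

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// relaxedConfig builds a rest.Config with a larger client-side rate limit so
// tight polling loops like the one above are throttled less aggressively.
func relaxedConfig(kubeconfig string) (*rest.Config, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // illustrative; 0 means "use client-go's default"
	cfg.Burst = 100 // illustrative
	return cfg, nil
}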
	I0130 20:00:30.689796   28131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:00:30.701285   28131 command_runner.go:130] > -16
	I0130 20:00:30.701309   28131 ops.go:34] apiserver oom_adj: -16
	I0130 20:00:30.701315   28131 kubeadm.go:640] restartCluster took 22.593523198s
	I0130 20:00:30.701321   28131 kubeadm.go:406] StartCluster complete in 22.652428929s
	I0130 20:00:30.701336   28131 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:00:30.701409   28131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:00:30.701987   28131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:00:30.702185   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:00:30.702273   28131 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:00:30.705252   28131 out.go:177] * Enabled addons: 
	I0130 20:00:30.702486   28131 config.go:182] Loaded profile config "multinode-572652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:00:30.702534   28131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:00:30.706790   28131 addons.go:505] enable addons completed in 4.515455ms: enabled=[]
	I0130 20:00:30.705663   28131 kapi.go:59] client config for multinode-572652: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.crt", KeyFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.key", CAFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 20:00:30.707083   28131 round_trippers.go:463] GET https://192.168.39.186:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0130 20:00:30.707094   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:30.707101   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:30.707110   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:30.710346   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:30.710362   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:30.710368   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:30.710373   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:30.710378   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:30.710384   28131 round_trippers.go:580]     Content-Length: 291
	I0130 20:00:30.710389   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:30 GMT
	I0130 20:00:30.710397   28131 round_trippers.go:580]     Audit-Id: bc06ef03-d44f-4584-b279-73694fd09508
	I0130 20:00:30.710404   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:30.710533   28131 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2034a0c9-1da9-4b9e-a99f-a32637cca2aa","resourceVersion":"779","creationTimestamp":"2024-01-30T19:50:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0130 20:00:30.710700   28131 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-572652" context rescaled to 1 replicas
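The kapi.go:248 rescale above goes through the Deployment's autoscaling/v1 Scale subresource (the 291-byte Scale object in the previous response body). A rough client-go equivalent of that step, for illustration only; the package and function names are placeholders.

package corednsscale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS sets the kube-system/coredns Deployment to one replica via
// the Scale subresource, matching the "rescaled to 1 replicas" step above.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == 1 {
		return nil // already at the desired count, as in this run
	}
	scale.Spec.Replicas = 1
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}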
	I0130 20:00:30.710738   28131 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:00:30.712377   28131 out.go:177] * Verifying Kubernetes components...
	I0130 20:00:30.713561   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:00:30.812968   28131 command_runner.go:130] > apiVersion: v1
	I0130 20:00:30.812990   28131 command_runner.go:130] > data:
	I0130 20:00:30.812997   28131 command_runner.go:130] >   Corefile: |
	I0130 20:00:30.813002   28131 command_runner.go:130] >     .:53 {
	I0130 20:00:30.813007   28131 command_runner.go:130] >         log
	I0130 20:00:30.813013   28131 command_runner.go:130] >         errors
	I0130 20:00:30.813019   28131 command_runner.go:130] >         health {
	I0130 20:00:30.813025   28131 command_runner.go:130] >            lameduck 5s
	I0130 20:00:30.813031   28131 command_runner.go:130] >         }
	I0130 20:00:30.813041   28131 command_runner.go:130] >         ready
	I0130 20:00:30.813053   28131 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0130 20:00:30.813062   28131 command_runner.go:130] >            pods insecure
	I0130 20:00:30.813096   28131 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0130 20:00:30.813107   28131 command_runner.go:130] >            ttl 30
	I0130 20:00:30.813113   28131 command_runner.go:130] >         }
	I0130 20:00:30.813121   28131 command_runner.go:130] >         prometheus :9153
	I0130 20:00:30.813128   28131 command_runner.go:130] >         hosts {
	I0130 20:00:30.813139   28131 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0130 20:00:30.813148   28131 command_runner.go:130] >            fallthrough
	I0130 20:00:30.813155   28131 command_runner.go:130] >         }
	I0130 20:00:30.813165   28131 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0130 20:00:30.813177   28131 command_runner.go:130] >            max_concurrent 1000
	I0130 20:00:30.813184   28131 command_runner.go:130] >         }
	I0130 20:00:30.813192   28131 command_runner.go:130] >         cache 30
	I0130 20:00:30.813200   28131 command_runner.go:130] >         loop
	I0130 20:00:30.813211   28131 command_runner.go:130] >         reload
	I0130 20:00:30.813218   28131 command_runner.go:130] >         loadbalance
	I0130 20:00:30.813227   28131 command_runner.go:130] >     }
	I0130 20:00:30.813235   28131 command_runner.go:130] > kind: ConfigMap
	I0130 20:00:30.813242   28131 command_runner.go:130] > metadata:
	I0130 20:00:30.813254   28131 command_runner.go:130] >   creationTimestamp: "2024-01-30T19:50:00Z"
	I0130 20:00:30.813264   28131 command_runner.go:130] >   name: coredns
	I0130 20:00:30.813278   28131 command_runner.go:130] >   namespace: kube-system
	I0130 20:00:30.813289   28131 command_runner.go:130] >   resourceVersion: "367"
	I0130 20:00:30.813299   28131 command_runner.go:130] >   uid: 1ce91ba8-d9c8-4489-a6e7-f9d0329e709e
	I0130 20:00:30.815641   28131 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 20:00:30.815641   28131 node_ready.go:35] waiting up to 6m0s for node "multinode-572652" to be "Ready" ...
	I0130 20:00:30.884008   28131 request.go:629] Waited for 68.247113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:30.884089   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:30.884103   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:30.884131   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:30.884145   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:30.888034   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:30.888053   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:30.888060   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:30.888065   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:30.888071   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:30 GMT
	I0130 20:00:30.888076   28131 round_trippers.go:580]     Audit-Id: ae7266de-d8fe-4d18-85d6-0886f520bc79
	I0130 20:00:30.888081   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:30.888089   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:30.888686   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"732","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 20:00:31.316291   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:31.316326   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:31.316337   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:31.316347   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:31.318825   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:31.318844   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:31.318850   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:31 GMT
	I0130 20:00:31.318856   28131 round_trippers.go:580]     Audit-Id: ae473ae9-bbba-4447-b35b-999ba4daba70
	I0130 20:00:31.318861   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:31.318867   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:31.318872   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:31.318879   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:31.319078   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"732","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 20:00:31.815845   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:31.815868   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:31.815876   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:31.815882   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:31.818598   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:31.818616   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:31.818623   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:31.818629   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:31.818636   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:31.818644   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:31 GMT
	I0130 20:00:31.818655   28131 round_trippers.go:580]     Audit-Id: b14557c2-f1cd-4f6a-b841-812d3cd209c8
	I0130 20:00:31.818663   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:31.819427   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"732","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 20:00:32.315918   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:32.315946   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:32.315955   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:32.315966   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:32.318648   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:32.318673   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:32.318683   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:32.318692   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:32.318700   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:32.318708   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:32 GMT
	I0130 20:00:32.318717   28131 round_trippers.go:580]     Audit-Id: 1eb54a70-28f1-4117-8260-f66809eb3d29
	I0130 20:00:32.318725   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:32.319638   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:32.319978   28131 node_ready.go:49] node "multinode-572652" has status "Ready":"True"
	I0130 20:00:32.319996   28131 node_ready.go:38] duration metric: took 1.504331815s waiting for node "multinode-572652" to be "Ready" ...
	I0130 20:00:32.320004   28131 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:00:32.320059   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods
	I0130 20:00:32.320067   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:32.320074   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:32.320080   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:32.326299   28131 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0130 20:00:32.326323   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:32.326346   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:32.326355   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:32 GMT
	I0130 20:00:32.326363   28131 round_trippers.go:580]     Audit-Id: e55a29aa-4123-4799-b183-ca16382aa9c3
	I0130 20:00:32.326372   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:32.326380   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:32.326388   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:32.329217   28131 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"844"},"items":[{"metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"765","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82957 chars]
	I0130 20:00:32.331744   28131 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-579fc" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:32.331837   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-579fc
	I0130 20:00:32.331848   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:32.331860   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:32.331866   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:32.334362   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:32.334381   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:32.334391   28131 round_trippers.go:580]     Audit-Id: ca4ae772-49c1-4d4e-9ec1-acc9315d9d55
	I0130 20:00:32.334400   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:32.334408   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:32.334416   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:32.334428   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:32.334439   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:32 GMT
	I0130 20:00:32.334589   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"765","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 20:00:32.335147   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:32.335166   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:32.335188   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:32.335202   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:32.337357   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:32.337378   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:32.337388   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:32.337396   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:32.337404   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:32.337413   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:32.337421   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:32 GMT
	I0130 20:00:32.337431   28131 round_trippers.go:580]     Audit-Id: 1f436368-773f-4796-8fed-bcb344578596
	I0130 20:00:32.337532   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:32.832181   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-579fc
	I0130 20:00:32.832207   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:32.832215   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:32.832221   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:32.834961   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:32.834985   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:32.834998   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:32.835005   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:32 GMT
	I0130 20:00:32.835012   28131 round_trippers.go:580]     Audit-Id: 2ed46596-786f-4821-9782-79c986bd0c74
	I0130 20:00:32.835020   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:32.835033   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:32.835047   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:32.835235   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"765","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 20:00:32.835764   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:32.835780   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:32.835787   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:32.835793   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:32.839351   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:32.839399   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:32.839417   28131 round_trippers.go:580]     Audit-Id: 64aef52f-9070-4b7b-a8ed-b3727c2e4843
	I0130 20:00:32.839436   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:32.839454   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:32.839481   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:32.839502   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:32.839521   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:32 GMT
	I0130 20:00:32.840566   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:33.332858   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-579fc
	I0130 20:00:33.332885   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:33.332894   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:33.332900   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:33.335732   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:33.335801   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:33.335813   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:33.335823   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:33.335831   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:33.335843   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:33.335855   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:33 GMT
	I0130 20:00:33.335864   28131 round_trippers.go:580]     Audit-Id: 0a42727a-2930-4568-8ac1-ab179a4cc96f
	I0130 20:00:33.336064   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"765","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 20:00:33.336619   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:33.336638   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:33.336649   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:33.336665   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:33.339144   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:33.339165   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:33.339175   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:33.339184   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:33.339193   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:33.339206   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:33.339213   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:33 GMT
	I0130 20:00:33.339219   28131 round_trippers.go:580]     Audit-Id: afcd222c-b5b1-446b-9774-25b6c784f2f6
	I0130 20:00:33.339541   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:33.832125   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-579fc
	I0130 20:00:33.832150   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:33.832161   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:33.832167   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:33.834992   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:33.835015   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:33.835025   28131 round_trippers.go:580]     Audit-Id: 13537fcf-cc0b-4f31-ba19-41feedabb40e
	I0130 20:00:33.835037   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:33.835043   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:33.835051   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:33.835056   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:33.835064   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:33 GMT
	I0130 20:00:33.835232   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"765","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 20:00:33.835715   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:33.835731   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:33.835741   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:33.835750   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:33.838149   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:33.838168   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:33.838177   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:33.838185   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:33.838194   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:33 GMT
	I0130 20:00:33.838202   28131 round_trippers.go:580]     Audit-Id: 11c69287-deec-4885-8c34-58695dad528e
	I0130 20:00:33.838211   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:33.838220   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:33.838473   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:34.332168   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-579fc
	I0130 20:00:34.332201   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:34.332213   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:34.332222   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:34.335799   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:34.335820   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:34.335830   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:34.335837   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:34 GMT
	I0130 20:00:34.335845   28131 round_trippers.go:580]     Audit-Id: 78a231fd-cf32-4265-8788-aeceac56c879
	I0130 20:00:34.335853   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:34.335862   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:34.335872   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:34.336168   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"765","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 20:00:34.336752   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:34.336772   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:34.336783   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:34.336800   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:34.338702   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:34.338720   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:34.338732   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:34.338739   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:34.338746   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:34.338754   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:34.338761   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:34 GMT
	I0130 20:00:34.338773   28131 round_trippers.go:580]     Audit-Id: b44aefd9-9674-4c83-ad79-829f6a656231
	I0130 20:00:34.339109   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:34.339420   28131 pod_ready.go:102] pod "coredns-5dd5756b68-579fc" in "kube-system" namespace has status "Ready":"False"
	I0130 20:00:34.832198   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-579fc
	I0130 20:00:34.832220   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:34.832230   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:34.832239   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:34.835754   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:34.835779   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:34.835790   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:34.835798   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:34.835810   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:34.835819   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:34.835829   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:34 GMT
	I0130 20:00:34.835841   28131 round_trippers.go:580]     Audit-Id: 4353671b-0fdf-47b8-b7cb-79a804695063
	I0130 20:00:34.836028   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"765","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 20:00:34.836469   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:34.836484   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:34.836494   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:34.836503   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:34.838513   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:34.838533   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:34.838542   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:34.838550   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:34.838559   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:34.838578   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:34 GMT
	I0130 20:00:34.838587   28131 round_trippers.go:580]     Audit-Id: a7d90322-f90b-4c1f-8558-c290c39a4058
	I0130 20:00:34.838595   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:34.838896   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:35.332574   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-579fc
	I0130 20:00:35.332604   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:35.332614   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:35.332623   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:35.337855   28131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0130 20:00:35.337874   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:35.337880   28131 round_trippers.go:580]     Audit-Id: 6ac7b13b-f031-48b9-9adf-94797fb84188
	I0130 20:00:35.337887   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:35.337895   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:35.337921   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:35.337944   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:35.337953   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:35 GMT
	I0130 20:00:35.338240   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"765","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 20:00:35.338689   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:35.338705   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:35.338716   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:35.338731   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:35.340437   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:35.340451   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:35.340457   28131 round_trippers.go:580]     Audit-Id: bff7fa25-8c6a-45ea-a609-f020afe8471a
	I0130 20:00:35.340462   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:35.340467   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:35.340472   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:35.340478   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:35.340491   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:35 GMT
	I0130 20:00:35.340855   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:35.832579   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-579fc
	I0130 20:00:35.832604   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:35.832612   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:35.832618   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:35.835317   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:35.835336   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:35.835342   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:35.835348   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:35 GMT
	I0130 20:00:35.835353   28131 round_trippers.go:580]     Audit-Id: 3fe1845b-3dd4-4f6b-9ffa-4eeb1a669dae
	I0130 20:00:35.835358   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:35.835363   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:35.835368   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:35.835759   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"850","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0130 20:00:35.836237   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:35.836258   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:35.836265   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:35.836271   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:35.838531   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:35.838547   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:35.838553   28131 round_trippers.go:580]     Audit-Id: 577cb04d-9338-4d8f-b4c6-c8af37b2a102
	I0130 20:00:35.838558   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:35.838572   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:35.838580   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:35.838588   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:35.838648   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:35 GMT
	I0130 20:00:35.839334   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:35.839621   28131 pod_ready.go:92] pod "coredns-5dd5756b68-579fc" in "kube-system" namespace has status "Ready":"True"
	I0130 20:00:35.839635   28131 pod_ready.go:81] duration metric: took 3.507869488s waiting for pod "coredns-5dd5756b68-579fc" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:35.839644   28131 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:35.839688   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-572652
	I0130 20:00:35.839695   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:35.839702   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:35.839708   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:35.841890   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:35.841910   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:35.841919   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:35.841927   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:35 GMT
	I0130 20:00:35.841935   28131 round_trippers.go:580]     Audit-Id: c26b7963-7bbb-474d-a549-f8ad1ad5b7df
	I0130 20:00:35.841943   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:35.841958   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:35.841973   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:35.842405   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-572652","namespace":"kube-system","uid":"e44ed93f-1c85-4d27-bacb-f454d6eaa0b6","resourceVersion":"767","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.186:2379","kubernetes.io/config.hash":"3d195cc1c68274636debff677374c054","kubernetes.io/config.mirror":"3d195cc1c68274636debff677374c054","kubernetes.io/config.seen":"2024-01-30T19:50:00.428284843Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0130 20:00:35.842883   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:35.842899   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:35.842907   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:35.842917   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:35.845094   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:35.845111   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:35.845126   28131 round_trippers.go:580]     Audit-Id: 27da5ae0-8ec7-4fff-8712-26aaf92bcf08
	I0130 20:00:35.845135   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:35.845143   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:35.845151   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:35.845159   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:35.845167   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:35 GMT
	I0130 20:00:35.845334   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:36.339854   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-572652
	I0130 20:00:36.339879   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:36.339887   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:36.339892   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:36.342795   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:36.342820   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:36.342830   28131 round_trippers.go:580]     Audit-Id: 61335e06-23f4-41af-ada5-7c937484ddc2
	I0130 20:00:36.342839   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:36.342854   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:36.342862   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:36.342870   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:36.342885   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:36 GMT
	I0130 20:00:36.343051   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-572652","namespace":"kube-system","uid":"e44ed93f-1c85-4d27-bacb-f454d6eaa0b6","resourceVersion":"767","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.186:2379","kubernetes.io/config.hash":"3d195cc1c68274636debff677374c054","kubernetes.io/config.mirror":"3d195cc1c68274636debff677374c054","kubernetes.io/config.seen":"2024-01-30T19:50:00.428284843Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0130 20:00:36.343600   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:36.343622   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:36.343632   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:36.343641   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:36.345712   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:36.345726   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:36.345732   28131 round_trippers.go:580]     Audit-Id: 4cbeb0d7-7e23-4efb-b095-6b2dfec53e33
	I0130 20:00:36.345737   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:36.345743   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:36.345751   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:36.345760   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:36.345774   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:36 GMT
	I0130 20:00:36.345968   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:36.839780   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-572652
	I0130 20:00:36.839803   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:36.839811   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:36.839817   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:36.842410   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:36.842441   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:36.842450   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:36.842455   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:36.842460   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:36 GMT
	I0130 20:00:36.842465   28131 round_trippers.go:580]     Audit-Id: 02f5a7cb-23d3-4042-b8f8-679891a4afc5
	I0130 20:00:36.842470   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:36.842475   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:36.842951   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-572652","namespace":"kube-system","uid":"e44ed93f-1c85-4d27-bacb-f454d6eaa0b6","resourceVersion":"767","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.186:2379","kubernetes.io/config.hash":"3d195cc1c68274636debff677374c054","kubernetes.io/config.mirror":"3d195cc1c68274636debff677374c054","kubernetes.io/config.seen":"2024-01-30T19:50:00.428284843Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0130 20:00:36.843404   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:36.843418   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:36.843425   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:36.843431   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:36.846775   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:36.846790   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:36.846797   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:36.846805   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:36.846810   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:36.846815   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:36 GMT
	I0130 20:00:36.846820   28131 round_trippers.go:580]     Audit-Id: 4cff9f3c-dd39-4e77-88d5-577a35e534e0
	I0130 20:00:36.846825   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:36.848067   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:37.340768   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-572652
	I0130 20:00:37.340791   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:37.340799   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:37.340805   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:37.343251   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:37.343283   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:37.343290   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:37.343296   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:37.343301   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:37 GMT
	I0130 20:00:37.343306   28131 round_trippers.go:580]     Audit-Id: f99f6387-25ab-43db-8f6d-ccbd89a8ebdf
	I0130 20:00:37.343311   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:37.343316   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:37.343623   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-572652","namespace":"kube-system","uid":"e44ed93f-1c85-4d27-bacb-f454d6eaa0b6","resourceVersion":"767","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.186:2379","kubernetes.io/config.hash":"3d195cc1c68274636debff677374c054","kubernetes.io/config.mirror":"3d195cc1c68274636debff677374c054","kubernetes.io/config.seen":"2024-01-30T19:50:00.428284843Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0130 20:00:37.343973   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:37.343984   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:37.343992   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:37.343997   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:37.345840   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:37.345856   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:37.345863   28131 round_trippers.go:580]     Audit-Id: 03e13582-aa36-4133-a76d-88bb2355ac71
	I0130 20:00:37.345868   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:37.345873   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:37.345880   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:37.345894   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:37.345903   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:37 GMT
	I0130 20:00:37.346151   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:37.840814   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-572652
	I0130 20:00:37.840837   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:37.840845   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:37.840851   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:37.843716   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:37.843737   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:37.843745   28131 round_trippers.go:580]     Audit-Id: 13a840b7-974b-4e36-a113-b1c4b8baa520
	I0130 20:00:37.843754   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:37.843770   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:37.843777   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:37.843782   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:37.843787   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:37 GMT
	I0130 20:00:37.844229   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-572652","namespace":"kube-system","uid":"e44ed93f-1c85-4d27-bacb-f454d6eaa0b6","resourceVersion":"857","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.186:2379","kubernetes.io/config.hash":"3d195cc1c68274636debff677374c054","kubernetes.io/config.mirror":"3d195cc1c68274636debff677374c054","kubernetes.io/config.seen":"2024-01-30T19:50:00.428284843Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0130 20:00:37.844575   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:37.844586   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:37.844593   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:37.844598   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:37.848241   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:37.848256   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:37.848262   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:37.848267   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:37.848273   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:37 GMT
	I0130 20:00:37.848278   28131 round_trippers.go:580]     Audit-Id: 0e30ad7f-296e-40b3-a21e-0c5018ea2e7a
	I0130 20:00:37.848283   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:37.848288   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:37.848499   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:37.848909   28131 pod_ready.go:92] pod "etcd-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:00:37.848930   28131 pod_ready.go:81] duration metric: took 2.009279098s waiting for pod "etcd-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:37.848953   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:37.849029   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-572652
	I0130 20:00:37.849039   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:37.849050   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:37.849059   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:37.851294   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:37.851308   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:37.851314   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:37.851319   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:37.851324   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:37.851332   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:37 GMT
	I0130 20:00:37.851338   28131 round_trippers.go:580]     Audit-Id: 6a1ab919-cc7e-4df9-947c-3af4b97b74f5
	I0130 20:00:37.851347   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:37.851605   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-572652","namespace":"kube-system","uid":"fc451607-277c-45fe-a0f9-a3502db0251b","resourceVersion":"768","creationTimestamp":"2024-01-30T19:49:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.186:8443","kubernetes.io/config.hash":"d6f18dcbbdea790709196864d2f77f8b","kubernetes.io/config.mirror":"d6f18dcbbdea790709196864d2f77f8b","kubernetes.io/config.seen":"2024-01-30T19:49:51.352745901Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:49:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0130 20:00:37.852108   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:37.852125   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:37.852135   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:37.852141   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:37.854660   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:37.854677   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:37.854684   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:37 GMT
	I0130 20:00:37.854689   28131 round_trippers.go:580]     Audit-Id: a9b78166-2678-4e2e-8d71-391c1e6e0a76
	I0130 20:00:37.854696   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:37.854702   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:37.854708   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:37.854714   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:37.855348   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:38.349102   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-572652
	I0130 20:00:38.349124   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:38.349132   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:38.349140   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:38.352164   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:38.352199   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:38.352207   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:38.352216   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:38.352224   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:38.352233   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:38.352247   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:38 GMT
	I0130 20:00:38.352262   28131 round_trippers.go:580]     Audit-Id: 4871aa4b-89aa-439b-9155-9b93180f5532
	I0130 20:00:38.352439   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-572652","namespace":"kube-system","uid":"fc451607-277c-45fe-a0f9-a3502db0251b","resourceVersion":"863","creationTimestamp":"2024-01-30T19:49:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.186:8443","kubernetes.io/config.hash":"d6f18dcbbdea790709196864d2f77f8b","kubernetes.io/config.mirror":"d6f18dcbbdea790709196864d2f77f8b","kubernetes.io/config.seen":"2024-01-30T19:49:51.352745901Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:49:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0130 20:00:38.352842   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:38.352858   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:38.352865   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:38.352871   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:38.355192   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:38.355212   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:38.355222   28131 round_trippers.go:580]     Audit-Id: 92a7d8b6-7af8-40ea-a4fe-cffb9014fa00
	I0130 20:00:38.355230   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:38.355241   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:38.355257   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:38.355281   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:38.355293   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:38 GMT
	I0130 20:00:38.355864   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:38.356155   28131 pod_ready.go:92] pod "kube-apiserver-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:00:38.356171   28131 pod_ready.go:81] duration metric: took 507.207131ms waiting for pod "kube-apiserver-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:38.356179   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:38.356222   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-572652
	I0130 20:00:38.356229   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:38.356236   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:38.356241   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:38.358080   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:38.358100   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:38.358109   28131 round_trippers.go:580]     Audit-Id: bf1c10ed-8adf-4849-b609-e21c4213cf22
	I0130 20:00:38.358118   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:38.358130   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:38.358142   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:38.358157   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:38.358165   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:38 GMT
	I0130 20:00:38.358319   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-572652","namespace":"kube-system","uid":"ce85a6a9-3600-41a9-824a-d01c009aead2","resourceVersion":"769","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.mirror":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289181Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0130 20:00:38.358685   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:38.358698   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:38.358708   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:38.358714   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:38.360608   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:38.360630   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:38.360640   28131 round_trippers.go:580]     Audit-Id: d172da41-aff6-49c2-a059-cb7a1006239e
	I0130 20:00:38.360649   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:38.360655   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:38.360661   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:38.360666   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:38.360671   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:38 GMT
	I0130 20:00:38.360806   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:38.857420   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-572652
	I0130 20:00:38.857450   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:38.857462   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:38.857470   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:38.860427   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:38.860453   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:38.860464   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:38.860473   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:38.860496   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:38.860503   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:38 GMT
	I0130 20:00:38.860511   28131 round_trippers.go:580]     Audit-Id: bf1be6bd-a3d4-4808-9a93-a1dbb0408e25
	I0130 20:00:38.860519   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:38.860688   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-572652","namespace":"kube-system","uid":"ce85a6a9-3600-41a9-824a-d01c009aead2","resourceVersion":"769","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.mirror":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289181Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0130 20:00:38.861122   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:38.861136   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:38.861143   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:38.861148   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:38.862856   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:38.862871   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:38.862877   28131 round_trippers.go:580]     Audit-Id: d4857201-4289-452d-98a7-a55fb6efcc37
	I0130 20:00:38.862885   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:38.862894   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:38.862908   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:38.862917   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:38.862925   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:38 GMT
	I0130 20:00:38.863302   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:39.357026   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-572652
	I0130 20:00:39.357057   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:39.357069   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:39.357078   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:39.360961   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:39.360988   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:39.360998   28131 round_trippers.go:580]     Audit-Id: 843a058e-8384-4709-8f89-fec830f3a2b6
	I0130 20:00:39.361008   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:39.361015   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:39.361027   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:39.361047   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:39.361056   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:39 GMT
	I0130 20:00:39.362164   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-572652","namespace":"kube-system","uid":"ce85a6a9-3600-41a9-824a-d01c009aead2","resourceVersion":"769","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.mirror":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289181Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0130 20:00:39.362692   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:39.362707   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:39.362714   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:39.362720   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:39.367107   28131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0130 20:00:39.367127   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:39.367134   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:39 GMT
	I0130 20:00:39.367139   28131 round_trippers.go:580]     Audit-Id: e86ef981-e3ff-4b0b-8dda-f1b9646f30a3
	I0130 20:00:39.367144   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:39.367149   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:39.367156   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:39.367165   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:39.368045   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:39.857113   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-572652
	I0130 20:00:39.857134   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:39.857141   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:39.857147   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:39.859940   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:39.859977   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:39.859987   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:39 GMT
	I0130 20:00:39.859995   28131 round_trippers.go:580]     Audit-Id: 2e8f1fc4-897d-43bf-8274-315b7a5c8006
	I0130 20:00:39.860002   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:39.860016   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:39.860025   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:39.860033   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:39.860204   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-572652","namespace":"kube-system","uid":"ce85a6a9-3600-41a9-824a-d01c009aead2","resourceVersion":"769","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.mirror":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289181Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0130 20:00:39.860643   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:39.860658   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:39.860665   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:39.860671   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:39.862880   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:39.862894   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:39.862900   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:39 GMT
	I0130 20:00:39.862905   28131 round_trippers.go:580]     Audit-Id: 108a2aa3-6cd2-4f63-b88b-8a7f9723aea4
	I0130 20:00:39.862910   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:39.862915   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:39.862933   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:39.862948   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:39.863295   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:40.356984   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-572652
	I0130 20:00:40.357015   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:40.357023   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:40.357028   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:40.359795   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:40.359823   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:40.359832   28131 round_trippers.go:580]     Audit-Id: d0db36c1-8d6c-4b60-aa20-4e1629dd6465
	I0130 20:00:40.359841   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:40.359850   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:40.359858   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:40.359866   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:40.359879   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:40 GMT
	I0130 20:00:40.360147   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-572652","namespace":"kube-system","uid":"ce85a6a9-3600-41a9-824a-d01c009aead2","resourceVersion":"769","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.mirror":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289181Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0130 20:00:40.360563   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:40.360576   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:40.360585   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:40.360591   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:40.362886   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:40.362904   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:40.362913   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:40.362921   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:40 GMT
	I0130 20:00:40.362928   28131 round_trippers.go:580]     Audit-Id: 37dc539f-f2eb-409c-83f6-e74abf03ef74
	I0130 20:00:40.362933   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:40.362938   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:40.362943   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:40.363073   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:40.363359   28131 pod_ready.go:102] pod "kube-controller-manager-multinode-572652" in "kube-system" namespace has status "Ready":"False"
	I0130 20:00:40.856716   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-572652
	I0130 20:00:40.856738   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:40.856746   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:40.856752   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:40.859301   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:40.859321   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:40.859330   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:40.859338   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:40.859346   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:40.859355   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:40.859366   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:40 GMT
	I0130 20:00:40.859373   28131 round_trippers.go:580]     Audit-Id: ea225bb1-f7a1-44f1-b75c-f7df749a48d4
	I0130 20:00:40.859676   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-572652","namespace":"kube-system","uid":"ce85a6a9-3600-41a9-824a-d01c009aead2","resourceVersion":"877","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.mirror":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289181Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0130 20:00:40.860191   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:40.860207   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:40.860218   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:40.860224   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:40.862680   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:40.862693   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:40.862699   28131 round_trippers.go:580]     Audit-Id: 72e3b39e-a705-4f7c-b0b3-b2874cd5b09d
	I0130 20:00:40.862711   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:40.862719   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:40.862724   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:40.862732   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:40.862739   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:40 GMT
	I0130 20:00:40.863161   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:40.863558   28131 pod_ready.go:92] pod "kube-controller-manager-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:00:40.863578   28131 pod_ready.go:81] duration metric: took 2.507392309s waiting for pod "kube-controller-manager-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:40.863590   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hx9f7" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:40.863660   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx9f7
	I0130 20:00:40.863672   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:40.863682   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:40.863693   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:40.865513   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:40.865529   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:40.865546   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:40.865556   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:40.865568   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:40.865577   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:40.865582   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:40 GMT
	I0130 20:00:40.865590   28131 round_trippers.go:580]     Audit-Id: ed632b03-3b00-47fa-983a-cb192292e5ea
	I0130 20:00:40.865754   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hx9f7","generateName":"kube-proxy-","namespace":"kube-system","uid":"95d8777b-0e61-4662-a7a6-1fb5e7b4ae29","resourceVersion":"773","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0130 20:00:40.866196   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:40.866210   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:40.866220   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:40.866230   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:40.868180   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:40.868197   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:40.868206   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:40 GMT
	I0130 20:00:40.868214   28131 round_trippers.go:580]     Audit-Id: 8288d6eb-8096-4bc8-8dbc-9cbdab33f255
	I0130 20:00:40.868221   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:40.868229   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:40.868250   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:40.868267   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:40.868413   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:40.868676   28131 pod_ready.go:92] pod "kube-proxy-hx9f7" in "kube-system" namespace has status "Ready":"True"
	I0130 20:00:40.868689   28131 pod_ready.go:81] duration metric: took 5.086229ms waiting for pod "kube-proxy-hx9f7" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:40.868696   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j5sr4" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:40.868732   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5sr4
	I0130 20:00:40.868738   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:40.868745   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:40.868751   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:40.870384   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:00:40.870407   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:40.870414   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:40.870426   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:40.870437   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:40.870445   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:40 GMT
	I0130 20:00:40.870454   28131 round_trippers.go:580]     Audit-Id: 6883a84e-6ce8-4610-a613-68202ce9be50
	I0130 20:00:40.870465   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:40.870622   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5sr4","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6bacfbc-c1e8-4dd2-bd48-778725887a72","resourceVersion":"699","creationTimestamp":"2024-01-30T19:51:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0130 20:00:40.884175   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m03
	I0130 20:00:40.884239   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:40.884247   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:40.884253   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:40.886973   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:40.886990   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:40.886998   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:40 GMT
	I0130 20:00:40.887003   28131 round_trippers.go:580]     Audit-Id: ecdda493-0ab0-4db6-9842-288bb2edad72
	I0130 20:00:40.887011   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:40.887016   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:40.887021   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:40.887036   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:40.887985   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652-m03","uid":"6e43dfc4-d01d-44de-b61c-e668bf1447ff","resourceVersion":"865","creationTimestamp":"2024-01-30T19:52:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T19_52_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:52:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3964 chars]
	I0130 20:00:40.888216   28131 pod_ready.go:92] pod "kube-proxy-j5sr4" in "kube-system" namespace has status "Ready":"True"
	I0130 20:00:40.888230   28131 pod_ready.go:81] duration metric: took 19.526736ms waiting for pod "kube-proxy-j5sr4" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:40.888238   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rbwvp" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:41.083565   28131 request.go:629] Waited for 195.269992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rbwvp
	I0130 20:00:41.083660   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rbwvp
	I0130 20:00:41.083671   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:41.083678   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:41.083684   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:41.086350   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:41.086368   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:41.086374   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:41.086379   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:41.086386   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:41.086394   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:41 GMT
	I0130 20:00:41.086402   28131 round_trippers.go:580]     Audit-Id: 6e4b7d21-3186-4048-b687-d454465e57ef
	I0130 20:00:41.086410   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:41.086775   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rbwvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"2cd3c663-bf55-49b2-9120-101ac59912fd","resourceVersion":"484","creationTimestamp":"2024-01-30T19:50:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0130 20:00:41.283479   28131 request.go:629] Waited for 196.314366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m02
	I0130 20:00:41.283557   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m02
	I0130 20:00:41.283562   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:41.283571   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:41.283581   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:41.286450   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:41.286465   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:41.286470   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:41.286475   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:41.286480   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:41.286485   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:41 GMT
	I0130 20:00:41.286494   28131 round_trippers.go:580]     Audit-Id: aa21b115-cee2-4117-a1e4-ffcd4ce7367a
	I0130 20:00:41.286503   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:41.286814   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652-m02","uid":"dff06704-3844-4766-a722-a280b6a04c06","resourceVersion":"777","creationTimestamp":"2024-01-30T19:50:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T19_52_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I0130 20:00:41.287068   28131 pod_ready.go:92] pod "kube-proxy-rbwvp" in "kube-system" namespace has status "Ready":"True"
	I0130 20:00:41.287082   28131 pod_ready.go:81] duration metric: took 398.838262ms waiting for pod "kube-proxy-rbwvp" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:41.287090   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:41.483807   28131 request.go:629] Waited for 196.388819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-572652
	I0130 20:00:41.483904   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-572652
	I0130 20:00:41.483915   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:41.484117   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:41.484136   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:41.492257   28131 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0130 20:00:41.492280   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:41.492287   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:41 GMT
	I0130 20:00:41.492292   28131 round_trippers.go:580]     Audit-Id: cf8ccedb-1890-461d-8a2f-b0d11366e7b5
	I0130 20:00:41.492298   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:41.492303   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:41.492308   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:41.492313   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:41.492740   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-572652","namespace":"kube-system","uid":"ee4d8608-40cb-4281-ac1f-bc5ac41ff27d","resourceVersion":"855","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"85e85fa7283981ab3a029cbc7c4cbcc1","kubernetes.io/config.mirror":"85e85fa7283981ab3a029cbc7c4cbcc1","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289879Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0130 20:00:41.683922   28131 request.go:629] Waited for 190.78489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:41.683985   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:00:41.683990   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:41.683997   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:41.684003   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:41.686767   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:41.686791   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:41.686801   28131 round_trippers.go:580]     Audit-Id: 746781c3-940b-49d1-a092-47dab0656034
	I0130 20:00:41.686808   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:41.686816   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:41.686827   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:41.686838   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:41.686849   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:41 GMT
	I0130 20:00:41.687349   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 20:00:41.687657   28131 pod_ready.go:92] pod "kube-scheduler-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:00:41.687674   28131 pod_ready.go:81] duration metric: took 400.578237ms waiting for pod "kube-scheduler-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:00:41.687684   28131 pod_ready.go:38] duration metric: took 9.36766942s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:00:41.687700   28131 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:00:41.687745   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:00:41.701324   28131 command_runner.go:130] > 1095
	I0130 20:00:41.701371   28131 api_server.go:72] duration metric: took 10.990604328s to wait for apiserver process to appear ...
	I0130 20:00:41.701383   28131 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:00:41.701404   28131 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0130 20:00:41.706007   28131 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0130 20:00:41.706064   28131 round_trippers.go:463] GET https://192.168.39.186:8443/version
	I0130 20:00:41.706073   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:41.706081   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:41.706090   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:41.707075   28131 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0130 20:00:41.707089   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:41.707095   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:41.707101   28131 round_trippers.go:580]     Content-Length: 264
	I0130 20:00:41.707106   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:41 GMT
	I0130 20:00:41.707111   28131 round_trippers.go:580]     Audit-Id: 311bf511-cc11-42d2-937c-28049548c8c9
	I0130 20:00:41.707119   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:41.707127   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:41.707135   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:41.707158   28131 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0130 20:00:41.707199   28131 api_server.go:141] control plane version: v1.28.4
	I0130 20:00:41.707212   28131 api_server.go:131] duration metric: took 5.823295ms to wait for apiserver health ...
	I0130 20:00:41.707220   28131 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:00:41.884418   28131 request.go:629] Waited for 177.127285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods
	I0130 20:00:41.884480   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods
	I0130 20:00:41.884485   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:41.884493   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:41.884499   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:41.888894   28131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0130 20:00:41.888934   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:41.888947   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:41.888954   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:41.888962   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:41 GMT
	I0130 20:00:41.888969   28131 round_trippers.go:580]     Audit-Id: 22f8141e-a3aa-4d8c-865b-6f6a6c057f88
	I0130 20:00:41.888978   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:41.888987   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:41.890394   28131 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"877"},"items":[{"metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"850","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I0130 20:00:41.892746   28131 system_pods.go:59] 12 kube-system pods found
	I0130 20:00:41.892766   28131 system_pods.go:61] "coredns-5dd5756b68-579fc" [8ed4a94c-417c-480d-9f9a-4101a5103066] Running
	I0130 20:00:41.892774   28131 system_pods.go:61] "etcd-multinode-572652" [e44ed93f-1c85-4d27-bacb-f454d6eaa0b6] Running
	I0130 20:00:41.892779   28131 system_pods.go:61] "kindnet-rzx54" [87aab713-13c1-4fd2-bc90-73b2998226dc] Running
	I0130 20:00:41.892785   28131 system_pods.go:61] "kindnet-srbck" [dd92c807-033f-496a-bff0-004577831a5c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0130 20:00:41.892791   28131 system_pods.go:61] "kindnet-w5jvc" [b629bb0f-d26e-4db0-9776-0e5e400dc7d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0130 20:00:41.892799   28131 system_pods.go:61] "kube-apiserver-multinode-572652" [fc451607-277c-45fe-a0f9-a3502db0251b] Running
	I0130 20:00:41.892805   28131 system_pods.go:61] "kube-controller-manager-multinode-572652" [ce85a6a9-3600-41a9-824a-d01c009aead2] Running
	I0130 20:00:41.892815   28131 system_pods.go:61] "kube-proxy-hx9f7" [95d8777b-0e61-4662-a7a6-1fb5e7b4ae29] Running
	I0130 20:00:41.892819   28131 system_pods.go:61] "kube-proxy-j5sr4" [d6bacfbc-c1e8-4dd2-bd48-778725887a72] Running
	I0130 20:00:41.892822   28131 system_pods.go:61] "kube-proxy-rbwvp" [2cd3c663-bf55-49b2-9120-101ac59912fd] Running
	I0130 20:00:41.892826   28131 system_pods.go:61] "kube-scheduler-multinode-572652" [ee4d8608-40cb-4281-ac1f-bc5ac41ff27d] Running
	I0130 20:00:41.892833   28131 system_pods.go:61] "storage-provisioner" [a1eb366d-4b7c-4900-9e2e-83ebcee3d015] Running
	I0130 20:00:41.892839   28131 system_pods.go:74] duration metric: took 185.60982ms to wait for pod list to return data ...
	I0130 20:00:41.892847   28131 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:00:42.084231   28131 request.go:629] Waited for 191.318012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/default/serviceaccounts
	I0130 20:00:42.084298   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/default/serviceaccounts
	I0130 20:00:42.084305   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:42.084315   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:42.084323   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:42.086891   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:00:42.086914   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:42.086921   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:42.086926   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:42.086932   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:42.086937   28131 round_trippers.go:580]     Content-Length: 261
	I0130 20:00:42.086942   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:42 GMT
	I0130 20:00:42.086947   28131 round_trippers.go:580]     Audit-Id: fa75089b-7f8c-426b-b310-a3425f0f712e
	I0130 20:00:42.086955   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:42.086971   28131 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"877"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7175b97d-e7ba-4fcf-b7e9-e331c9674ef5","resourceVersion":"333","creationTimestamp":"2024-01-30T19:50:12Z"}}]}
	I0130 20:00:42.087151   28131 default_sa.go:45] found service account: "default"
	I0130 20:00:42.087168   28131 default_sa.go:55] duration metric: took 194.314419ms for default service account to be created ...
	I0130 20:00:42.087175   28131 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:00:42.283543   28131 request.go:629] Waited for 196.309835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods
	I0130 20:00:42.283627   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods
	I0130 20:00:42.283638   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:42.283650   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:42.283659   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:42.289220   28131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0130 20:00:42.289241   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:42.289251   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:42.289259   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:42.289267   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:42 GMT
	I0130 20:00:42.289275   28131 round_trippers.go:580]     Audit-Id: fe150045-27ff-4445-bb0d-b36583c377e6
	I0130 20:00:42.289281   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:42.289288   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:42.290331   28131 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"877"},"items":[{"metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"850","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I0130 20:00:42.292693   28131 system_pods.go:86] 12 kube-system pods found
	I0130 20:00:42.292715   28131 system_pods.go:89] "coredns-5dd5756b68-579fc" [8ed4a94c-417c-480d-9f9a-4101a5103066] Running
	I0130 20:00:42.292721   28131 system_pods.go:89] "etcd-multinode-572652" [e44ed93f-1c85-4d27-bacb-f454d6eaa0b6] Running
	I0130 20:00:42.292726   28131 system_pods.go:89] "kindnet-rzx54" [87aab713-13c1-4fd2-bc90-73b2998226dc] Running
	I0130 20:00:42.292732   28131 system_pods.go:89] "kindnet-srbck" [dd92c807-033f-496a-bff0-004577831a5c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0130 20:00:42.292738   28131 system_pods.go:89] "kindnet-w5jvc" [b629bb0f-d26e-4db0-9776-0e5e400dc7d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0130 20:00:42.292745   28131 system_pods.go:89] "kube-apiserver-multinode-572652" [fc451607-277c-45fe-a0f9-a3502db0251b] Running
	I0130 20:00:42.292751   28131 system_pods.go:89] "kube-controller-manager-multinode-572652" [ce85a6a9-3600-41a9-824a-d01c009aead2] Running
	I0130 20:00:42.292755   28131 system_pods.go:89] "kube-proxy-hx9f7" [95d8777b-0e61-4662-a7a6-1fb5e7b4ae29] Running
	I0130 20:00:42.292759   28131 system_pods.go:89] "kube-proxy-j5sr4" [d6bacfbc-c1e8-4dd2-bd48-778725887a72] Running
	I0130 20:00:42.292763   28131 system_pods.go:89] "kube-proxy-rbwvp" [2cd3c663-bf55-49b2-9120-101ac59912fd] Running
	I0130 20:00:42.292767   28131 system_pods.go:89] "kube-scheduler-multinode-572652" [ee4d8608-40cb-4281-ac1f-bc5ac41ff27d] Running
	I0130 20:00:42.292773   28131 system_pods.go:89] "storage-provisioner" [a1eb366d-4b7c-4900-9e2e-83ebcee3d015] Running
	I0130 20:00:42.292780   28131 system_pods.go:126] duration metric: took 205.599837ms to wait for k8s-apps to be running ...
	I0130 20:00:42.292786   28131 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:00:42.292826   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:00:42.308083   28131 system_svc.go:56] duration metric: took 15.290483ms WaitForService to wait for kubelet.
	I0130 20:00:42.308100   28131 kubeadm.go:581] duration metric: took 11.597335646s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:00:42.308116   28131 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:00:42.484517   28131 request.go:629] Waited for 176.324268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes
	I0130 20:00:42.484584   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes
	I0130 20:00:42.484589   28131 round_trippers.go:469] Request Headers:
	I0130 20:00:42.484596   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:00:42.484602   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:00:42.488535   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:00:42.488554   28131 round_trippers.go:577] Response Headers:
	I0130 20:00:42.488561   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:00:42.488566   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:00:42.488571   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:00:42.488576   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:00:42 GMT
	I0130 20:00:42.488582   28131 round_trippers.go:580]     Audit-Id: 8752a76f-e6f5-4b30-a781-f05f644f4d8a
	I0130 20:00:42.488587   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:00:42.488970   28131 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"877"},"items":[{"metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"844","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16179 chars]
	I0130 20:00:42.489501   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:00:42.489517   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:00:42.489525   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:00:42.489529   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:00:42.489533   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:00:42.489536   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:00:42.489540   28131 node_conditions.go:105] duration metric: took 181.42051ms to run NodePressure ...
	I0130 20:00:42.489549   28131 start.go:228] waiting for startup goroutines ...
	I0130 20:00:42.489560   28131 start.go:233] waiting for cluster config update ...
	I0130 20:00:42.489566   28131 start.go:242] writing updated cluster config ...
	I0130 20:00:42.489985   28131 config.go:182] Loaded profile config "multinode-572652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:00:42.490063   28131 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/config.json ...
	I0130 20:00:42.492313   28131 out.go:177] * Starting worker node multinode-572652-m02 in cluster multinode-572652
	I0130 20:00:42.493413   28131 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:00:42.493439   28131 cache.go:56] Caching tarball of preloaded images
	I0130 20:00:42.493525   28131 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 20:00:42.493540   28131 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 20:00:42.493632   28131 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/config.json ...
	I0130 20:00:42.493812   28131 start.go:365] acquiring machines lock for multinode-572652-m02: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:00:42.493869   28131 start.go:369] acquired machines lock for "multinode-572652-m02" in 34.879µs
	I0130 20:00:42.493888   28131 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:00:42.493895   28131 fix.go:54] fixHost starting: m02
	I0130 20:00:42.494170   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:00:42.494202   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:00:42.508568   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43143
	I0130 20:00:42.508973   28131 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:00:42.509418   28131 main.go:141] libmachine: Using API Version  1
	I0130 20:00:42.509438   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:00:42.509705   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:00:42.509891   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .DriverName
	I0130 20:00:42.510006   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetState
	I0130 20:00:42.511578   28131 fix.go:102] recreateIfNeeded on multinode-572652-m02: state=Running err=<nil>
	W0130 20:00:42.511596   28131 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:00:42.514541   28131 out.go:177] * Updating the running kvm2 "multinode-572652-m02" VM ...
	I0130 20:00:42.515918   28131 machine.go:88] provisioning docker machine ...
	I0130 20:00:42.515940   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .DriverName
	I0130 20:00:42.516146   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetMachineName
	I0130 20:00:42.516293   28131 buildroot.go:166] provisioning hostname "multinode-572652-m02"
	I0130 20:00:42.516309   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetMachineName
	I0130 20:00:42.516454   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHHostname
	I0130 20:00:42.519069   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:42.519499   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:00:42.519541   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:42.519660   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHPort
	I0130 20:00:42.519837   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:00:42.519980   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:00:42.520113   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHUsername
	I0130 20:00:42.520238   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 20:00:42.520537   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0130 20:00:42.520550   28131 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-572652-m02 && echo "multinode-572652-m02" | sudo tee /etc/hostname
	I0130 20:00:42.665142   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-572652-m02
	
	I0130 20:00:42.665191   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHHostname
	I0130 20:00:42.667934   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:42.668259   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:00:42.668291   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:42.668489   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHPort
	I0130 20:00:42.668680   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:00:42.668853   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:00:42.669027   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHUsername
	I0130 20:00:42.669173   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 20:00:42.669472   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0130 20:00:42.669489   28131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-572652-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-572652-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-572652-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:00:42.799808   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:00:42.799833   28131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:00:42.799848   28131 buildroot.go:174] setting up certificates
	I0130 20:00:42.799854   28131 provision.go:83] configureAuth start
	I0130 20:00:42.799862   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetMachineName
	I0130 20:00:42.800135   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetIP
	I0130 20:00:42.802571   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:42.802870   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:00:42.802896   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:42.803085   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHHostname
	I0130 20:00:42.805032   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:42.805335   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:00:42.805361   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:42.805521   28131 provision.go:138] copyHostCerts
	I0130 20:00:42.805550   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:00:42.805581   28131 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:00:42.805592   28131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:00:42.805659   28131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:00:42.805728   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:00:42.805746   28131 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:00:42.805753   28131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:00:42.805784   28131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:00:42.805849   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:00:42.805866   28131 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:00:42.805872   28131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:00:42.805904   28131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:00:42.805967   28131 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.multinode-572652-m02 san=[192.168.39.137 192.168.39.137 localhost 127.0.0.1 minikube multinode-572652-m02]
	I0130 20:00:42.948612   28131 provision.go:172] copyRemoteCerts
	I0130 20:00:42.948666   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:00:42.948700   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHHostname
	I0130 20:00:42.951364   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:42.951710   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:00:42.951734   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:42.951970   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHPort
	I0130 20:00:42.952179   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:00:42.952349   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHUsername
	I0130 20:00:42.952506   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652-m02/id_rsa Username:docker}
	I0130 20:00:43.044029   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0130 20:00:43.044111   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0130 20:00:43.066975   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0130 20:00:43.067044   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:00:43.089576   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0130 20:00:43.089643   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:00:43.112168   28131 provision.go:86] duration metric: configureAuth took 312.302381ms
	I0130 20:00:43.112197   28131 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:00:43.112412   28131 config.go:182] Loaded profile config "multinode-572652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:00:43.112480   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHHostname
	I0130 20:00:43.114632   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:43.115047   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:00:43.115068   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:00:43.115289   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHPort
	I0130 20:00:43.115466   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:00:43.115640   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:00:43.115792   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHUsername
	I0130 20:00:43.115951   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 20:00:43.116249   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0130 20:00:43.116264   28131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:02:13.692427   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:02:13.692460   28131 machine.go:91] provisioned docker machine in 1m31.176520932s
	I0130 20:02:13.692474   28131 start.go:300] post-start starting for "multinode-572652-m02" (driver="kvm2")
	I0130 20:02:13.692489   28131 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:02:13.692515   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .DriverName
	I0130 20:02:13.692816   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:02:13.692842   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHHostname
	I0130 20:02:13.695617   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:13.696031   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:02:13.696062   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:13.696158   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHPort
	I0130 20:02:13.696328   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:02:13.696474   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHUsername
	I0130 20:02:13.696587   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652-m02/id_rsa Username:docker}
	I0130 20:02:13.793610   28131 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:02:13.797756   28131 command_runner.go:130] > NAME=Buildroot
	I0130 20:02:13.797773   28131 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0130 20:02:13.797777   28131 command_runner.go:130] > ID=buildroot
	I0130 20:02:13.797785   28131 command_runner.go:130] > VERSION_ID=2021.02.12
	I0130 20:02:13.797792   28131 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0130 20:02:13.797982   28131 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:02:13.798003   28131 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:02:13.798073   28131 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:02:13.798159   28131 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:02:13.798170   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> /etc/ssl/certs/116672.pem
	I0130 20:02:13.798265   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:02:13.806795   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:02:13.833710   28131 start.go:303] post-start completed in 141.224088ms
	I0130 20:02:13.833731   28131 fix.go:56] fixHost completed within 1m31.339835111s
	I0130 20:02:13.833753   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHHostname
	I0130 20:02:13.836225   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:13.836673   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:02:13.836704   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:13.836854   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHPort
	I0130 20:02:13.837015   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:02:13.837151   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:02:13.837281   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHUsername
	I0130 20:02:13.837418   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 20:02:13.837722   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0130 20:02:13.837733   28131 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:02:13.968203   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706644933.959969537
	
	I0130 20:02:13.968225   28131 fix.go:206] guest clock: 1706644933.959969537
	I0130 20:02:13.968232   28131 fix.go:219] Guest: 2024-01-30 20:02:13.959969537 +0000 UTC Remote: 2024-01-30 20:02:13.833736248 +0000 UTC m=+452.429444391 (delta=126.233289ms)
	I0130 20:02:13.968244   28131 fix.go:190] guest clock delta is within tolerance: 126.233289ms
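(The two fix.go lines above record the guest-vs-host clock comparison minikube makes after provisioning. A minimal, hypothetical Go sketch of that check is below; the 2s tolerance and the helper name are assumptions, not minikube's actual fix.go code.)

// Editorial sketch: compare the guest's `date +%s.%N` reading against the
// host clock and decide whether the delta is acceptable, as logged above.
package main

import (
	"fmt"
	"time"
)

// withinTolerance is a hypothetical helper; minikube's real logic lives in fix.go.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Parsed from the guest's output above: 1706644933.959969537
	guest := time.Unix(1706644933, 959969537)
	host := time.Now()
	delta, ok := withinTolerance(guest, host, 2*time.Second) // 2s tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}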
	I0130 20:02:13.968249   28131 start.go:83] releasing machines lock for "multinode-572652-m02", held for 1m31.474368968s
	I0130 20:02:13.968268   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .DriverName
	I0130 20:02:13.968524   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetIP
	I0130 20:02:13.971034   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:13.971440   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:02:13.971469   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:13.973326   28131 out.go:177] * Found network options:
	I0130 20:02:13.974518   28131 out.go:177]   - NO_PROXY=192.168.39.186
	W0130 20:02:13.975767   28131 proxy.go:119] fail to check proxy env: Error ip not in block
	I0130 20:02:13.975789   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .DriverName
	I0130 20:02:13.976285   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .DriverName
	I0130 20:02:13.976460   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .DriverName
	I0130 20:02:13.976550   28131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:02:13.976588   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHHostname
	W0130 20:02:13.976670   28131 proxy.go:119] fail to check proxy env: Error ip not in block
	I0130 20:02:13.976734   28131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:02:13.976754   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHHostname
	I0130 20:02:13.979324   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:13.979636   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:13.979690   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:02:13.979719   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:13.979856   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHPort
	I0130 20:02:13.980016   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:02:13.980154   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHUsername
	I0130 20:02:13.980174   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:02:13.980212   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:13.980314   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652-m02/id_rsa Username:docker}
	I0130 20:02:13.980411   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHPort
	I0130 20:02:13.980517   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 20:02:13.980635   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHUsername
	I0130 20:02:13.980753   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652-m02/id_rsa Username:docker}
	I0130 20:02:14.218079   28131 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0130 20:02:14.218135   28131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0130 20:02:14.224196   28131 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0130 20:02:14.224422   28131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:02:14.224474   28131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:02:14.232582   28131 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
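(The find/mv one-liner logged above sidelines any bridge/podman CNI configs by appending ".mk_disabled". The Go snippet below is an editorial sketch of the same idea using filepath.Glob; it is not minikube's cni.go implementation, and the function name is hypothetical.)

// Editorial sketch: rename bridge/podman CNI configs so CRI-O ignores them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func disableBridgeConfs(dir string) error {
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already sidelined
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeConfs("/etc/cni/net.d"); err != nil {
		fmt.Println(err)
	}
}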
	I0130 20:02:14.232604   28131 start.go:475] detecting cgroup driver to use...
	I0130 20:02:14.232673   28131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:02:14.245562   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:02:14.257381   28131 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:02:14.257436   28131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:02:14.270268   28131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:02:14.282452   28131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:02:14.417464   28131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:02:14.541614   28131 docker.go:233] disabling docker service ...
	I0130 20:02:14.541685   28131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:02:14.556118   28131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:02:14.572066   28131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:02:14.706140   28131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:02:14.835413   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:02:14.847390   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:02:14.863936   28131 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0130 20:02:14.864266   28131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:02:14.864337   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:02:14.873316   28131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:02:14.873369   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:02:14.883677   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:02:14.892936   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:02:14.902075   28131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:02:14.911367   28131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:02:14.919804   28131 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0130 20:02:14.919856   28131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:02:14.928961   28131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:02:15.043003   28131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:02:21.811093   28131 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.768057454s)
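(The crio.go steps just logged rewrite /etc/crio/crio.conf.d/02-crio.conf with in-place sed edits, pinning the pause image to registry.k8s.io/pause:3.9 and the cgroup driver to cgroupfs, then restart crio. The Go snippet below is a minimal sketch that reproduces the same sed command strings; it is an editorial illustration, not minikube's source.)

// Editorial sketch: build the sed invocations seen in the log above.
package main

import "fmt"

// sedSet mirrors the `sudo sed -i 's|^.*<key> = .*$|...|'` calls in the log,
// replacing any existing "<key> = ..." line wholesale.
func sedSet(key, value, file string) string {
	return fmt.Sprintf(`sudo sed -i 's|^.*%s = .*$|%s = "%s"|' %s`, key, key, value, file)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	fmt.Println(sedSet("pause_image", "registry.k8s.io/pause:3.9", conf))
	fmt.Println(sedSet("cgroup_manager", "cgroupfs", conf))
}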
	I0130 20:02:21.811115   28131 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:02:21.811155   28131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:02:21.815856   28131 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0130 20:02:21.815873   28131 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0130 20:02:21.815882   28131 command_runner.go:130] > Device: 16h/22d	Inode: 1228        Links: 1
	I0130 20:02:21.815892   28131 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 20:02:21.815900   28131 command_runner.go:130] > Access: 2024-01-30 20:02:21.735505111 +0000
	I0130 20:02:21.815908   28131 command_runner.go:130] > Modify: 2024-01-30 20:02:21.735505111 +0000
	I0130 20:02:21.815919   28131 command_runner.go:130] > Change: 2024-01-30 20:02:21.735505111 +0000
	I0130 20:02:21.815934   28131 command_runner.go:130] >  Birth: -
	I0130 20:02:21.816254   28131 start.go:543] Will wait 60s for crictl version
	I0130 20:02:21.816304   28131 ssh_runner.go:195] Run: which crictl
	I0130 20:02:21.819700   28131 command_runner.go:130] > /usr/bin/crictl
	I0130 20:02:21.819998   28131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:02:21.859124   28131 command_runner.go:130] > Version:  0.1.0
	I0130 20:02:21.859150   28131 command_runner.go:130] > RuntimeName:  cri-o
	I0130 20:02:21.859157   28131 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0130 20:02:21.859167   28131 command_runner.go:130] > RuntimeApiVersion:  v1
	I0130 20:02:21.859185   28131 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:02:21.859247   28131 ssh_runner.go:195] Run: crio --version
	I0130 20:02:21.901764   28131 command_runner.go:130] > crio version 1.24.1
	I0130 20:02:21.901789   28131 command_runner.go:130] > Version:          1.24.1
	I0130 20:02:21.901799   28131 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 20:02:21.901807   28131 command_runner.go:130] > GitTreeState:     dirty
	I0130 20:02:21.901816   28131 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 20:02:21.901824   28131 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 20:02:21.901830   28131 command_runner.go:130] > Compiler:         gc
	I0130 20:02:21.901837   28131 command_runner.go:130] > Platform:         linux/amd64
	I0130 20:02:21.901844   28131 command_runner.go:130] > Linkmode:         dynamic
	I0130 20:02:21.901860   28131 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 20:02:21.901867   28131 command_runner.go:130] > SeccompEnabled:   true
	I0130 20:02:21.901875   28131 command_runner.go:130] > AppArmorEnabled:  false
	I0130 20:02:21.902073   28131 ssh_runner.go:195] Run: crio --version
	I0130 20:02:21.945832   28131 command_runner.go:130] > crio version 1.24.1
	I0130 20:02:21.945872   28131 command_runner.go:130] > Version:          1.24.1
	I0130 20:02:21.945884   28131 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 20:02:21.945891   28131 command_runner.go:130] > GitTreeState:     dirty
	I0130 20:02:21.945901   28131 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 20:02:21.945909   28131 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 20:02:21.945926   28131 command_runner.go:130] > Compiler:         gc
	I0130 20:02:21.945936   28131 command_runner.go:130] > Platform:         linux/amd64
	I0130 20:02:21.945948   28131 command_runner.go:130] > Linkmode:         dynamic
	I0130 20:02:21.945963   28131 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 20:02:21.945973   28131 command_runner.go:130] > SeccompEnabled:   true
	I0130 20:02:21.945984   28131 command_runner.go:130] > AppArmorEnabled:  false
	I0130 20:02:21.949284   28131 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 20:02:21.950650   28131 out.go:177]   - env NO_PROXY=192.168.39.186
	I0130 20:02:21.951886   28131 main.go:141] libmachine: (multinode-572652-m02) Calling .GetIP
	I0130 20:02:21.954496   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:21.954870   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 20:02:21.954899   28131 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 20:02:21.955083   28131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 20:02:21.959139   28131 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0130 20:02:21.959176   28131 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652 for IP: 192.168.39.137
	I0130 20:02:21.959199   28131 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:02:21.959392   28131 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:02:21.959447   28131 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:02:21.959465   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0130 20:02:21.959486   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0130 20:02:21.959505   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0130 20:02:21.959523   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0130 20:02:21.959587   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:02:21.959628   28131 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:02:21.959654   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:02:21.959701   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:02:21.959736   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:02:21.959777   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:02:21.959833   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:02:21.959870   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem -> /usr/share/ca-certificates/11667.pem
	I0130 20:02:21.959890   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> /usr/share/ca-certificates/116672.pem
	I0130 20:02:21.959917   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:02:21.960256   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:02:21.984660   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:02:22.007450   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:02:22.030986   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:02:22.053971   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:02:22.075963   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:02:22.099739   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:02:22.123067   28131 ssh_runner.go:195] Run: openssl version
	I0130 20:02:22.128723   28131 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0130 20:02:22.129188   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:02:22.139029   28131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:02:22.143617   28131 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:02:22.144010   28131 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:02:22.144063   28131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:02:22.149424   28131 command_runner.go:130] > 3ec20f2e
	I0130 20:02:22.149488   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:02:22.157358   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:02:22.166389   28131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:02:22.170714   28131 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:02:22.170878   28131 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:02:22.170933   28131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:02:22.176321   28131 command_runner.go:130] > b5213941
	I0130 20:02:22.176366   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:02:22.184618   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:02:22.194349   28131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:02:22.199306   28131 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:02:22.199650   28131 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:02:22.199696   28131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:02:22.204992   28131 command_runner.go:130] > 51391683
	I0130 20:02:22.205176   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
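(The certs.go steps above hash each CA bundle with `openssl x509 -hash -noout` and then link /etc/ssl/certs/<hash>.0 to the PEM so OpenSSL-based clients can resolve it; b5213941 and 51391683 in the log are such hashes. Below is a short editorial sketch of that flow in Go; it assumes the `openssl` binary is on PATH and is not minikube's actual code.)

// Editorial sketch: compute the OpenSSL subject hash for a CA and print the
// symlink command that would register it under /etc/ssl/certs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func subjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	// e.g. "b5213941" in the run above
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}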
	I0130 20:02:22.213131   28131 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:02:22.216904   28131 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 20:02:22.217246   28131 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 20:02:22.217316   28131 ssh_runner.go:195] Run: crio config
	I0130 20:02:22.283692   28131 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0130 20:02:22.283716   28131 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0130 20:02:22.283723   28131 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0130 20:02:22.283727   28131 command_runner.go:130] > #
	I0130 20:02:22.283733   28131 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0130 20:02:22.283739   28131 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0130 20:02:22.283745   28131 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0130 20:02:22.283753   28131 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0130 20:02:22.283757   28131 command_runner.go:130] > # reload'.
	I0130 20:02:22.283763   28131 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0130 20:02:22.283772   28131 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0130 20:02:22.283784   28131 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0130 20:02:22.283800   28131 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0130 20:02:22.283808   28131 command_runner.go:130] > [crio]
	I0130 20:02:22.283816   28131 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0130 20:02:22.283826   28131 command_runner.go:130] > # containers images, in this directory.
	I0130 20:02:22.283836   28131 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0130 20:02:22.283852   28131 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0130 20:02:22.283861   28131 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0130 20:02:22.283867   28131 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0130 20:02:22.283877   28131 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0130 20:02:22.283884   28131 command_runner.go:130] > storage_driver = "overlay"
	I0130 20:02:22.283894   28131 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0130 20:02:22.283903   28131 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0130 20:02:22.283913   28131 command_runner.go:130] > storage_option = [
	I0130 20:02:22.283921   28131 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0130 20:02:22.283927   28131 command_runner.go:130] > ]
	I0130 20:02:22.283939   28131 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0130 20:02:22.283946   28131 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0130 20:02:22.283959   28131 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0130 20:02:22.283968   28131 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0130 20:02:22.283979   28131 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0130 20:02:22.283990   28131 command_runner.go:130] > # always happen on a node reboot
	I0130 20:02:22.284000   28131 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0130 20:02:22.284012   28131 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0130 20:02:22.284025   28131 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0130 20:02:22.284054   28131 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0130 20:02:22.284069   28131 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0130 20:02:22.284081   28131 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0130 20:02:22.284097   28131 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0130 20:02:22.284105   28131 command_runner.go:130] > # internal_wipe = true
	I0130 20:02:22.284116   28131 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0130 20:02:22.284128   28131 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0130 20:02:22.284140   28131 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0130 20:02:22.284149   28131 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0130 20:02:22.284158   28131 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0130 20:02:22.284165   28131 command_runner.go:130] > [crio.api]
	I0130 20:02:22.284174   28131 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0130 20:02:22.284185   28131 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0130 20:02:22.284197   28131 command_runner.go:130] > # IP address on which the stream server will listen.
	I0130 20:02:22.284205   28131 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0130 20:02:22.284219   28131 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0130 20:02:22.284231   28131 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0130 20:02:22.284238   28131 command_runner.go:130] > # stream_port = "0"
	I0130 20:02:22.284244   28131 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0130 20:02:22.284251   28131 command_runner.go:130] > # stream_enable_tls = false
	I0130 20:02:22.284257   28131 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0130 20:02:22.284264   28131 command_runner.go:130] > # stream_idle_timeout = ""
	I0130 20:02:22.284270   28131 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0130 20:02:22.284280   28131 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0130 20:02:22.284288   28131 command_runner.go:130] > # minutes.
	I0130 20:02:22.284296   28131 command_runner.go:130] > # stream_tls_cert = ""
	I0130 20:02:22.284308   28131 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0130 20:02:22.284321   28131 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0130 20:02:22.284331   28131 command_runner.go:130] > # stream_tls_key = ""
	I0130 20:02:22.284345   28131 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0130 20:02:22.284358   28131 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0130 20:02:22.284367   28131 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0130 20:02:22.284372   28131 command_runner.go:130] > # stream_tls_ca = ""
	I0130 20:02:22.284382   28131 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 20:02:22.284387   28131 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0130 20:02:22.284397   28131 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 20:02:22.284418   28131 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0130 20:02:22.284439   28131 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0130 20:02:22.284448   28131 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0130 20:02:22.284452   28131 command_runner.go:130] > [crio.runtime]
	I0130 20:02:22.284463   28131 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0130 20:02:22.284469   28131 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0130 20:02:22.284474   28131 command_runner.go:130] > # "nofile=1024:2048"
	I0130 20:02:22.284482   28131 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0130 20:02:22.284487   28131 command_runner.go:130] > # default_ulimits = [
	I0130 20:02:22.284492   28131 command_runner.go:130] > # ]
	I0130 20:02:22.284499   28131 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0130 20:02:22.284503   28131 command_runner.go:130] > # no_pivot = false
	I0130 20:02:22.284511   28131 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0130 20:02:22.284517   28131 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0130 20:02:22.284525   28131 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0130 20:02:22.284531   28131 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0130 20:02:22.284539   28131 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0130 20:02:22.284548   28131 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 20:02:22.284555   28131 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0130 20:02:22.284560   28131 command_runner.go:130] > # Cgroup setting for conmon
	I0130 20:02:22.284569   28131 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0130 20:02:22.284575   28131 command_runner.go:130] > conmon_cgroup = "pod"
	I0130 20:02:22.284581   28131 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0130 20:02:22.284589   28131 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0130 20:02:22.284597   28131 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 20:02:22.284605   28131 command_runner.go:130] > conmon_env = [
	I0130 20:02:22.284611   28131 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0130 20:02:22.284614   28131 command_runner.go:130] > ]
	I0130 20:02:22.284620   28131 command_runner.go:130] > # Additional environment variables to set for all the
	I0130 20:02:22.284626   28131 command_runner.go:130] > # containers. These are overridden if set in the
	I0130 20:02:22.284633   28131 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0130 20:02:22.284639   28131 command_runner.go:130] > # default_env = [
	I0130 20:02:22.284643   28131 command_runner.go:130] > # ]
	I0130 20:02:22.284651   28131 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0130 20:02:22.284655   28131 command_runner.go:130] > # selinux = false
	I0130 20:02:22.284669   28131 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0130 20:02:22.284678   28131 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0130 20:02:22.284684   28131 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0130 20:02:22.284690   28131 command_runner.go:130] > # seccomp_profile = ""
	I0130 20:02:22.284696   28131 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0130 20:02:22.284704   28131 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0130 20:02:22.284710   28131 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0130 20:02:22.284717   28131 command_runner.go:130] > # which might increase security.
	I0130 20:02:22.284722   28131 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0130 20:02:22.284731   28131 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0130 20:02:22.284737   28131 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0130 20:02:22.284745   28131 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0130 20:02:22.284752   28131 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0130 20:02:22.284759   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:02:22.284764   28131 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0130 20:02:22.284772   28131 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0130 20:02:22.284777   28131 command_runner.go:130] > # the cgroup blockio controller.
	I0130 20:02:22.284783   28131 command_runner.go:130] > # blockio_config_file = ""
	I0130 20:02:22.284792   28131 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0130 20:02:22.284799   28131 command_runner.go:130] > # irqbalance daemon.
	I0130 20:02:22.284804   28131 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0130 20:02:22.284811   28131 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0130 20:02:22.284819   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:02:22.284823   28131 command_runner.go:130] > # rdt_config_file = ""
	I0130 20:02:22.284831   28131 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0130 20:02:22.284835   28131 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0130 20:02:22.284844   28131 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0130 20:02:22.284848   28131 command_runner.go:130] > # separate_pull_cgroup = ""
	I0130 20:02:22.284856   28131 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0130 20:02:22.284868   28131 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0130 20:02:22.284877   28131 command_runner.go:130] > # will be added.
	I0130 20:02:22.284883   28131 command_runner.go:130] > # default_capabilities = [
	I0130 20:02:22.284892   28131 command_runner.go:130] > # 	"CHOWN",
	I0130 20:02:22.284901   28131 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0130 20:02:22.284907   28131 command_runner.go:130] > # 	"FSETID",
	I0130 20:02:22.284915   28131 command_runner.go:130] > # 	"FOWNER",
	I0130 20:02:22.284923   28131 command_runner.go:130] > # 	"SETGID",
	I0130 20:02:22.284927   28131 command_runner.go:130] > # 	"SETUID",
	I0130 20:02:22.284931   28131 command_runner.go:130] > # 	"SETPCAP",
	I0130 20:02:22.284935   28131 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0130 20:02:22.284941   28131 command_runner.go:130] > # 	"KILL",
	I0130 20:02:22.284945   28131 command_runner.go:130] > # ]
	I0130 20:02:22.284953   28131 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0130 20:02:22.284959   28131 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 20:02:22.284966   28131 command_runner.go:130] > # default_sysctls = [
	I0130 20:02:22.284969   28131 command_runner.go:130] > # ]
	I0130 20:02:22.284976   28131 command_runner.go:130] > # List of devices on the host that a
	I0130 20:02:22.284982   28131 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0130 20:02:22.284989   28131 command_runner.go:130] > # allowed_devices = [
	I0130 20:02:22.284993   28131 command_runner.go:130] > # 	"/dev/fuse",
	I0130 20:02:22.284996   28131 command_runner.go:130] > # ]
	I0130 20:02:22.285002   28131 command_runner.go:130] > # List of additional devices. specified as
	I0130 20:02:22.285012   28131 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0130 20:02:22.285018   28131 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0130 20:02:22.285046   28131 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 20:02:22.285053   28131 command_runner.go:130] > # additional_devices = [
	I0130 20:02:22.285057   28131 command_runner.go:130] > # ]
	I0130 20:02:22.285062   28131 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0130 20:02:22.285068   28131 command_runner.go:130] > # cdi_spec_dirs = [
	I0130 20:02:22.285072   28131 command_runner.go:130] > # 	"/etc/cdi",
	I0130 20:02:22.285076   28131 command_runner.go:130] > # 	"/var/run/cdi",
	I0130 20:02:22.285080   28131 command_runner.go:130] > # ]
	I0130 20:02:22.285086   28131 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0130 20:02:22.285094   28131 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0130 20:02:22.285098   28131 command_runner.go:130] > # Defaults to false.
	I0130 20:02:22.285103   28131 command_runner.go:130] > # device_ownership_from_security_context = false
	I0130 20:02:22.285110   28131 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0130 20:02:22.285137   28131 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0130 20:02:22.285144   28131 command_runner.go:130] > # hooks_dir = [
	I0130 20:02:22.285149   28131 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0130 20:02:22.285155   28131 command_runner.go:130] > # ]
	I0130 20:02:22.285160   28131 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0130 20:02:22.285171   28131 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0130 20:02:22.285179   28131 command_runner.go:130] > # its default mounts from the following two files:
	I0130 20:02:22.285182   28131 command_runner.go:130] > #
	I0130 20:02:22.285196   28131 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0130 20:02:22.285208   28131 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0130 20:02:22.285220   28131 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0130 20:02:22.285229   28131 command_runner.go:130] > #
	I0130 20:02:22.285239   28131 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0130 20:02:22.285251   28131 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0130 20:02:22.285264   28131 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0130 20:02:22.285275   28131 command_runner.go:130] > #      only add mounts it finds in this file.
	I0130 20:02:22.285283   28131 command_runner.go:130] > #
	I0130 20:02:22.285290   28131 command_runner.go:130] > # default_mounts_file = ""
	I0130 20:02:22.285302   28131 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0130 20:02:22.285314   28131 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0130 20:02:22.285322   28131 command_runner.go:130] > pids_limit = 1024
	I0130 20:02:22.285332   28131 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0130 20:02:22.285345   28131 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0130 20:02:22.285363   28131 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0130 20:02:22.285378   28131 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0130 20:02:22.285388   28131 command_runner.go:130] > # log_size_max = -1
	I0130 20:02:22.285395   28131 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0130 20:02:22.285406   28131 command_runner.go:130] > # log_to_journald = false
	I0130 20:02:22.285414   28131 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0130 20:02:22.285419   28131 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0130 20:02:22.285427   28131 command_runner.go:130] > # Path to directory for container attach sockets.
	I0130 20:02:22.285433   28131 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0130 20:02:22.285441   28131 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0130 20:02:22.285445   28131 command_runner.go:130] > # bind_mount_prefix = ""
	I0130 20:02:22.285451   28131 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0130 20:02:22.285456   28131 command_runner.go:130] > # read_only = false
	I0130 20:02:22.285462   28131 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0130 20:02:22.285471   28131 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0130 20:02:22.285476   28131 command_runner.go:130] > # live configuration reload.
	I0130 20:02:22.285480   28131 command_runner.go:130] > # log_level = "info"
	I0130 20:02:22.285487   28131 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0130 20:02:22.285498   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:02:22.285504   28131 command_runner.go:130] > # log_filter = ""
	I0130 20:02:22.285510   28131 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0130 20:02:22.285518   28131 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0130 20:02:22.285525   28131 command_runner.go:130] > # separated by comma.
	I0130 20:02:22.285530   28131 command_runner.go:130] > # uid_mappings = ""
	I0130 20:02:22.285537   28131 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0130 20:02:22.285545   28131 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0130 20:02:22.285549   28131 command_runner.go:130] > # separated by comma.
	I0130 20:02:22.285555   28131 command_runner.go:130] > # gid_mappings = ""
	I0130 20:02:22.285561   28131 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0130 20:02:22.285570   28131 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 20:02:22.285576   28131 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 20:02:22.285582   28131 command_runner.go:130] > # minimum_mappable_uid = -1
	I0130 20:02:22.285589   28131 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0130 20:02:22.285596   28131 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 20:02:22.285603   28131 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 20:02:22.285609   28131 command_runner.go:130] > # minimum_mappable_gid = -1
	I0130 20:02:22.285617   28131 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0130 20:02:22.285629   28131 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0130 20:02:22.285641   28131 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0130 20:02:22.285650   28131 command_runner.go:130] > # ctr_stop_timeout = 30
	I0130 20:02:22.285658   28131 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0130 20:02:22.285669   28131 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0130 20:02:22.285680   28131 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0130 20:02:22.285688   28131 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0130 20:02:22.285698   28131 command_runner.go:130] > drop_infra_ctr = false
	I0130 20:02:22.285705   28131 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0130 20:02:22.285715   28131 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0130 20:02:22.285722   28131 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0130 20:02:22.285729   28131 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0130 20:02:22.285736   28131 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0130 20:02:22.285743   28131 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0130 20:02:22.285747   28131 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0130 20:02:22.285757   28131 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0130 20:02:22.285763   28131 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0130 20:02:22.285775   28131 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0130 20:02:22.285785   28131 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0130 20:02:22.285791   28131 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0130 20:02:22.285930   28131 command_runner.go:130] > # default_runtime = "runc"
	I0130 20:02:22.285957   28131 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0130 20:02:22.285966   28131 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0130 20:02:22.285980   28131 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0130 20:02:22.285993   28131 command_runner.go:130] > # creation as a file is not desired either.
	I0130 20:02:22.286007   28131 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0130 20:02:22.286018   28131 command_runner.go:130] > # the hostname is being managed dynamically.
	I0130 20:02:22.286030   28131 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0130 20:02:22.286039   28131 command_runner.go:130] > # ]
	I0130 20:02:22.286051   28131 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0130 20:02:22.286062   28131 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0130 20:02:22.286076   28131 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0130 20:02:22.286090   28131 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0130 20:02:22.286097   28131 command_runner.go:130] > #
	I0130 20:02:22.286114   28131 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0130 20:02:22.286129   28131 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0130 20:02:22.286140   28131 command_runner.go:130] > #  runtime_type = "oci"
	I0130 20:02:22.286150   28131 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0130 20:02:22.286158   28131 command_runner.go:130] > #  privileged_without_host_devices = false
	I0130 20:02:22.286163   28131 command_runner.go:130] > #  allowed_annotations = []
	I0130 20:02:22.286172   28131 command_runner.go:130] > # Where:
	I0130 20:02:22.286182   28131 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0130 20:02:22.286196   28131 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0130 20:02:22.286210   28131 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0130 20:02:22.286224   28131 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0130 20:02:22.286233   28131 command_runner.go:130] > #   in $PATH.
	I0130 20:02:22.286244   28131 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0130 20:02:22.286254   28131 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0130 20:02:22.286265   28131 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0130 20:02:22.286269   28131 command_runner.go:130] > #   state.
	I0130 20:02:22.286279   28131 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0130 20:02:22.286293   28131 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0130 20:02:22.286308   28131 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0130 20:02:22.286322   28131 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0130 20:02:22.286336   28131 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0130 20:02:22.286349   28131 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0130 20:02:22.286359   28131 command_runner.go:130] > #   The currently recognized values are:
	I0130 20:02:22.286367   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0130 20:02:22.286382   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0130 20:02:22.286396   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0130 20:02:22.286410   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0130 20:02:22.286425   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0130 20:02:22.286438   28131 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0130 20:02:22.286451   28131 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0130 20:02:22.286463   28131 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0130 20:02:22.286472   28131 command_runner.go:130] > #   should be moved to the container's cgroup
	I0130 20:02:22.286484   28131 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0130 20:02:22.286496   28131 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0130 20:02:22.286503   28131 command_runner.go:130] > runtime_type = "oci"
	I0130 20:02:22.286514   28131 command_runner.go:130] > runtime_root = "/run/runc"
	I0130 20:02:22.286525   28131 command_runner.go:130] > runtime_config_path = ""
	I0130 20:02:22.286533   28131 command_runner.go:130] > monitor_path = ""
	I0130 20:02:22.286543   28131 command_runner.go:130] > monitor_cgroup = ""
	I0130 20:02:22.286552   28131 command_runner.go:130] > monitor_exec_cgroup = ""
	I0130 20:02:22.286564   28131 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0130 20:02:22.286571   28131 command_runner.go:130] > # running containers
	I0130 20:02:22.286578   28131 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0130 20:02:22.286593   28131 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0130 20:02:22.286626   28131 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0130 20:02:22.286639   28131 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0130 20:02:22.286652   28131 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0130 20:02:22.286663   28131 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0130 20:02:22.286672   28131 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0130 20:02:22.286683   28131 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0130 20:02:22.286692   28131 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0130 20:02:22.286703   28131 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0130 20:02:22.286712   28131 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0130 20:02:22.286725   28131 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0130 20:02:22.286739   28131 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0130 20:02:22.286756   28131 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0130 20:02:22.286770   28131 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0130 20:02:22.286778   28131 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0130 20:02:22.286788   28131 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0130 20:02:22.286798   28131 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0130 20:02:22.286804   28131 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0130 20:02:22.286813   28131 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0130 20:02:22.286819   28131 command_runner.go:130] > # Example:
	I0130 20:02:22.286824   28131 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0130 20:02:22.286832   28131 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0130 20:02:22.286837   28131 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0130 20:02:22.286844   28131 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0130 20:02:22.286848   28131 command_runner.go:130] > # cpuset = 0
	I0130 20:02:22.286853   28131 command_runner.go:130] > # cpushares = "0-1"
	I0130 20:02:22.286858   28131 command_runner.go:130] > # Where:
	I0130 20:02:22.286865   28131 command_runner.go:130] > # The workload name is workload-type.
	I0130 20:02:22.286872   28131 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0130 20:02:22.286880   28131 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0130 20:02:22.286886   28131 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0130 20:02:22.286896   28131 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0130 20:02:22.286904   28131 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0130 20:02:22.286908   28131 command_runner.go:130] > # 
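The workload comments above are the only explanation of the annotation-driven opt-in, so here is a minimal Go sketch (not part of the captured log) of the annotation set a pod would carry for the example "workload-type" workload; the container name "nginx" and the cpushares value "512" are hypothetical.

package main

import "fmt"

func main() {
	// Annotations for the example workload described above: the activation
	// annotation opts the pod in (its value is ignored), and the prefixed
	// key overrides cpushares for one container.
	annotations := map[string]string{
		"io.crio/workload":            "",
		"io.crio.workload-type/nginx": `{"cpushares": "512"}`,
	}
	for k, v := range annotations {
		fmt.Printf("%s: %s\n", k, v)
	}
}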
	I0130 20:02:22.286916   28131 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0130 20:02:22.286920   28131 command_runner.go:130] > #
	I0130 20:02:22.286926   28131 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0130 20:02:22.286932   28131 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0130 20:02:22.286941   28131 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0130 20:02:22.286947   28131 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0130 20:02:22.286958   28131 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0130 20:02:22.286962   28131 command_runner.go:130] > [crio.image]
	I0130 20:02:22.286970   28131 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0130 20:02:22.286975   28131 command_runner.go:130] > # default_transport = "docker://"
	I0130 20:02:22.286983   28131 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0130 20:02:22.286990   28131 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0130 20:02:22.286996   28131 command_runner.go:130] > # global_auth_file = ""
	I0130 20:02:22.287001   28131 command_runner.go:130] > # The image used to instantiate infra containers.
	I0130 20:02:22.287009   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:02:22.287014   28131 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0130 20:02:22.287023   28131 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0130 20:02:22.287029   28131 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0130 20:02:22.287037   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:02:22.287042   28131 command_runner.go:130] > # pause_image_auth_file = ""
	I0130 20:02:22.287048   28131 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0130 20:02:22.287054   28131 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0130 20:02:22.287062   28131 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0130 20:02:22.287068   28131 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0130 20:02:22.287074   28131 command_runner.go:130] > # pause_command = "/pause"
	I0130 20:02:22.287081   28131 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0130 20:02:22.287089   28131 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0130 20:02:22.287097   28131 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0130 20:02:22.287105   28131 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0130 20:02:22.287111   28131 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0130 20:02:22.287117   28131 command_runner.go:130] > # signature_policy = ""
	I0130 20:02:22.287123   28131 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0130 20:02:22.287132   28131 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0130 20:02:22.287136   28131 command_runner.go:130] > # changing them here.
	I0130 20:02:22.287140   28131 command_runner.go:130] > # insecure_registries = [
	I0130 20:02:22.287146   28131 command_runner.go:130] > # ]
	I0130 20:02:22.287153   28131 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0130 20:02:22.287160   28131 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0130 20:02:22.287165   28131 command_runner.go:130] > # image_volumes = "mkdir"
	I0130 20:02:22.287173   28131 command_runner.go:130] > # Temporary directory to use for storing big files
	I0130 20:02:22.287177   28131 command_runner.go:130] > # big_files_temporary_dir = ""
	I0130 20:02:22.287186   28131 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0130 20:02:22.287190   28131 command_runner.go:130] > # CNI plugins.
	I0130 20:02:22.287193   28131 command_runner.go:130] > [crio.network]
	I0130 20:02:22.287199   28131 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0130 20:02:22.287206   28131 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0130 20:02:22.287211   28131 command_runner.go:130] > # cni_default_network = ""
	I0130 20:02:22.287219   28131 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0130 20:02:22.287223   28131 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0130 20:02:22.287229   28131 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0130 20:02:22.287235   28131 command_runner.go:130] > # plugin_dirs = [
	I0130 20:02:22.287241   28131 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0130 20:02:22.287245   28131 command_runner.go:130] > # ]
	I0130 20:02:22.287254   28131 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0130 20:02:22.287263   28131 command_runner.go:130] > [crio.metrics]
	I0130 20:02:22.287281   28131 command_runner.go:130] > # Globally enable or disable metrics support.
	I0130 20:02:22.287293   28131 command_runner.go:130] > enable_metrics = true
	I0130 20:02:22.287301   28131 command_runner.go:130] > # Specify enabled metrics collectors.
	I0130 20:02:22.287312   28131 command_runner.go:130] > # Per default all metrics are enabled.
	I0130 20:02:22.287322   28131 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0130 20:02:22.287328   28131 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0130 20:02:22.287336   28131 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0130 20:02:22.287340   28131 command_runner.go:130] > # metrics_collectors = [
	I0130 20:02:22.287345   28131 command_runner.go:130] > # 	"operations",
	I0130 20:02:22.287350   28131 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0130 20:02:22.287356   28131 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0130 20:02:22.287361   28131 command_runner.go:130] > # 	"operations_errors",
	I0130 20:02:22.287368   28131 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0130 20:02:22.287373   28131 command_runner.go:130] > # 	"image_pulls_by_name",
	I0130 20:02:22.287379   28131 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0130 20:02:22.287384   28131 command_runner.go:130] > # 	"image_pulls_failures",
	I0130 20:02:22.287389   28131 command_runner.go:130] > # 	"image_pulls_successes",
	I0130 20:02:22.287393   28131 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0130 20:02:22.287399   28131 command_runner.go:130] > # 	"image_layer_reuse",
	I0130 20:02:22.287403   28131 command_runner.go:130] > # 	"containers_oom_total",
	I0130 20:02:22.287408   28131 command_runner.go:130] > # 	"containers_oom",
	I0130 20:02:22.287412   28131 command_runner.go:130] > # 	"processes_defunct",
	I0130 20:02:22.287417   28131 command_runner.go:130] > # 	"operations_total",
	I0130 20:02:22.287422   28131 command_runner.go:130] > # 	"operations_latency_seconds",
	I0130 20:02:22.287429   28131 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0130 20:02:22.287433   28131 command_runner.go:130] > # 	"operations_errors_total",
	I0130 20:02:22.287440   28131 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0130 20:02:22.287444   28131 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0130 20:02:22.287449   28131 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0130 20:02:22.287455   28131 command_runner.go:130] > # 	"image_pulls_success_total",
	I0130 20:02:22.287459   28131 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0130 20:02:22.287466   28131 command_runner.go:130] > # 	"containers_oom_count_total",
	I0130 20:02:22.287470   28131 command_runner.go:130] > # ]
	I0130 20:02:22.287475   28131 command_runner.go:130] > # The port on which the metrics server will listen.
	I0130 20:02:22.287482   28131 command_runner.go:130] > # metrics_port = 9090
	I0130 20:02:22.287487   28131 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0130 20:02:22.287493   28131 command_runner.go:130] > # metrics_socket = ""
	I0130 20:02:22.287498   28131 command_runner.go:130] > # The certificate for the secure metrics server.
	I0130 20:02:22.287505   28131 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0130 20:02:22.287511   28131 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0130 20:02:22.287518   28131 command_runner.go:130] > # certificate on any modification event.
	I0130 20:02:22.287522   28131 command_runner.go:130] > # metrics_cert = ""
	I0130 20:02:22.287529   28131 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0130 20:02:22.287535   28131 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0130 20:02:22.287539   28131 command_runner.go:130] > # metrics_key = ""
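Since enable_metrics is true and the commented default port is 9090, a quick Go sketch (not from the log; the host, port, and /metrics path are assumptions based on the config above) for scraping the CRI-O Prometheus endpoint:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// CRI-O serves Prometheus metrics on the metrics port (9090 per the
	// commented default above); localhost and /metrics are assumed here.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("fetched %d bytes of metrics\n", len(body))
}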
	I0130 20:02:22.287545   28131 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0130 20:02:22.287549   28131 command_runner.go:130] > [crio.tracing]
	I0130 20:02:22.287557   28131 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0130 20:02:22.287561   28131 command_runner.go:130] > # enable_tracing = false
	I0130 20:02:22.287567   28131 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0130 20:02:22.287572   28131 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0130 20:02:22.287579   28131 command_runner.go:130] > # Number of samples to collect per million spans.
	I0130 20:02:22.287584   28131 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0130 20:02:22.287592   28131 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0130 20:02:22.287596   28131 command_runner.go:130] > [crio.stats]
	I0130 20:02:22.287602   28131 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0130 20:02:22.287607   28131 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0130 20:02:22.287615   28131 command_runner.go:130] > # stats_collection_period = 0
	I0130 20:02:22.287704   28131 command_runner.go:130] ! time="2024-01-30 20:02:22.272225709Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0130 20:02:22.287722   28131 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0130 20:02:22.288001   28131 cni.go:84] Creating CNI manager for ""
	I0130 20:02:22.288011   28131 cni.go:136] 3 nodes found, recommending kindnet
	I0130 20:02:22.288019   28131 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:02:22.288041   28131 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-572652 NodeName:multinode-572652-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:02:22.288167   28131 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-572652-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:02:22.288217   28131 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-572652-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-572652 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
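The kubeadm config and kubelet unit above are rendered from the node parameters in the preceding kubeadm.go:176 line. A minimal Go sketch of that kind of rendering with text/template; the struct and template here are illustrative, not minikube's actual implementation, and only cover the InitConfiguration fragment:

package main

import (
	"os"
	"text/template"
)

// nodeParams holds the few values this illustrative fragment needs; it is
// not minikube's internal type.
type nodeParams struct {
	NodeName  string
	NodeIP    string
	APIPort   int
	CRISocket string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	p := nodeParams{
		NodeName:  "multinode-572652-m02",
		NodeIP:    "192.168.39.137",
		APIPort:   8443,
		CRISocket: "unix:///var/run/crio/crio.sock",
	}
	// Render the fragment to stdout; the real flow writes the full config
	// to the node over SSH.
	tmpl := template.Must(template.New("init").Parse(initTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}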
	I0130 20:02:22.288263   28131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:02:22.296959   28131 command_runner.go:130] > kubeadm
	I0130 20:02:22.296981   28131 command_runner.go:130] > kubectl
	I0130 20:02:22.296987   28131 command_runner.go:130] > kubelet
	I0130 20:02:22.297131   28131 binaries.go:44] Found k8s binaries, skipping transfer
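The "Found k8s binaries, skipping transfer" decision above amounts to checking that kubeadm, kubectl and kubelet already exist under the versioned binaries directory. A rough Go equivalent (the directory path mirrors the log; everything else is an assumption):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Same check as above: if all three binaries are already present in the
	// versioned directory, the transfer step can be skipped.
	dir := "/var/lib/minikube/binaries/v1.28.4"
	missing := 0
	for _, name := range []string{"kubeadm", "kubectl", "kubelet"} {
		if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
			fmt.Println("missing:", name)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("Found k8s binaries, skipping transfer")
	}
}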
	I0130 20:02:22.297192   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0130 20:02:22.308015   28131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0130 20:02:22.325408   28131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:02:22.343162   28131 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0130 20:02:22.347161   28131 command_runner.go:130] > 192.168.39.186	control-plane.minikube.internal
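The grep above verifies that control-plane.minikube.internal already resolves to the control-plane IP via /etc/hosts. A small Go sketch of the same check (IP and hostname taken from the log; appending the entry is only simulated):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Check whether control-plane.minikube.internal already maps to the
	// control-plane IP in /etc/hosts.
	const ip = "192.168.39.186"
	const host = "control-plane.minikube.internal"
	f, err := os.Open("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 || fields[0] != ip {
			continue
		}
		for _, name := range fields[1:] {
			if name == host {
				fmt.Println("hosts entry already present")
				return
			}
		}
	}
	fmt.Printf("would append: %s\t%s\n", ip, host)
}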
	I0130 20:02:22.347208   28131 host.go:66] Checking if "multinode-572652" exists ...
	I0130 20:02:22.347532   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:02:22.347554   28131 config.go:182] Loaded profile config "multinode-572652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:02:22.347557   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:02:22.362291   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0130 20:02:22.362663   28131 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:02:22.363115   28131 main.go:141] libmachine: Using API Version  1
	I0130 20:02:22.363137   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:02:22.363455   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:02:22.363626   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 20:02:22.363755   28131 start.go:304] JoinCluster: &{Name:multinode-572652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-572652 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:02:22.363867   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0130 20:02:22.363886   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:02:22.366427   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:02:22.366775   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:02:22.366808   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:02:22.366944   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 20:02:22.367091   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:02:22.367228   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 20:02:22.367358   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652/id_rsa Username:docker}
	I0130 20:02:22.531383   28131 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token a1a9d3.vmgcso8h2c2va3ui --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
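The join command above is produced by running kubeadm token create --print-join-command --ttl=0 on the control plane over SSH. A minimal Go sketch of obtaining it locally (assumes kubeadm on PATH and admin credentials; not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask kubeadm for a non-expiring join command, as the step above does
	// on the control-plane node.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").CombinedOutput()
	if err != nil {
		fmt.Println("token create failed:", err, string(out))
		return
	}
	fmt.Println("join command:", strings.TrimSpace(string(out)))
}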
	I0130 20:02:22.531432   28131 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0130 20:02:22.531466   28131 host.go:66] Checking if "multinode-572652" exists ...
	I0130 20:02:22.531823   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:02:22.531856   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:02:22.546271   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I0130 20:02:22.546699   28131 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:02:22.547227   28131 main.go:141] libmachine: Using API Version  1
	I0130 20:02:22.547253   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:02:22.547649   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:02:22.547824   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 20:02:22.548035   28131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-572652-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0130 20:02:22.548058   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:02:22.550836   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:02:22.551223   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:02:22.551261   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:02:22.551428   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 20:02:22.551585   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:02:22.551742   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 20:02:22.551860   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652/id_rsa Username:docker}
	I0130 20:02:22.709428   28131 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0130 20:02:22.770613   28131 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-w5jvc, kube-system/kube-proxy-rbwvp
	I0130 20:02:25.788162   28131 command_runner.go:130] > node/multinode-572652-m02 cordoned
	I0130 20:02:25.788196   28131 command_runner.go:130] > pod "busybox-5b5d89c9d6-f2vmn" has DeletionTimestamp older than 1 seconds, skipping
	I0130 20:02:25.788206   28131 command_runner.go:130] > node/multinode-572652-m02 drained
	I0130 20:02:25.788232   28131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-572652-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.240174144s)
	I0130 20:02:25.788248   28131 node.go:108] successfully drained node "m02"
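Draining the stale worker before rejoining is done with the kubectl drain invocation shown above. A Go sketch of the same call via os/exec (node name and flags copied from the log; the deprecated --delete-local-data flag is dropped, as the warning above suggests):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Drain the stale worker before it rejoins; flags mirror the log, minus
	// the deprecated --delete-local-data.
	cmd := exec.Command("kubectl", "drain", "multinode-572652-m02",
		"--force", "--grace-period=1", "--skip-wait-for-delete-timeout=1",
		"--disable-eviction", "--ignore-daemonsets", "--delete-emptydir-data")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("drain failed:", err)
	}
}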
	I0130 20:02:25.788573   28131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:02:25.788808   28131 kapi.go:59] client config for multinode-572652: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.crt", KeyFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.key", CAFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 20:02:25.789180   28131 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0130 20:02:25.789243   28131 round_trippers.go:463] DELETE https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m02
	I0130 20:02:25.789254   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:25.789265   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:25.789274   28131 round_trippers.go:473]     Content-Type: application/json
	I0130 20:02:25.789282   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:25.801309   28131 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0130 20:02:25.801330   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:25.801340   28131 round_trippers.go:580]     Audit-Id: 4a94e5a4-0920-499d-986c-900454c7cba7
	I0130 20:02:25.801347   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:25.801354   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:25.801362   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:25.801370   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:25.801379   28131 round_trippers.go:580]     Content-Length: 171
	I0130 20:02:25.801392   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:25 GMT
	I0130 20:02:25.801556   28131 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-572652-m02","kind":"nodes","uid":"dff06704-3844-4766-a722-a280b6a04c06"}}
	I0130 20:02:25.801601   28131 node.go:124] successfully deleted node "m02"
	I0130 20:02:25.801610   28131 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
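The node object itself is then removed with a raw DELETE against the API server, authenticated by the client certificate, key and CA referenced in the kapi.go client config above. A self-contained Go sketch of that request (URL, node name and file paths copied from the log; error handling kept minimal):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// DELETE the stale Node object directly, using the client cert/key and
	// CA that the kapi.go client config above points at.
	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.crt",
		"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.key")
	if err != nil {
		fmt.Println(err)
		return
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	req, err := http.NewRequest(http.MethodDelete,
		"https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m02", nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}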
	I0130 20:02:25.801630   28131 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0130 20:02:25.801648   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a1a9d3.vmgcso8h2c2va3ui --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-572652-m02"
	I0130 20:02:25.851091   28131 command_runner.go:130] ! W0130 20:02:25.842835    2673 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0130 20:02:25.851174   28131 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0130 20:02:25.991103   28131 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0130 20:02:25.991130   28131 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0130 20:02:26.771348   28131 command_runner.go:130] > [preflight] Running pre-flight checks
	I0130 20:02:26.771381   28131 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0130 20:02:26.771396   28131 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0130 20:02:26.771408   28131 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:02:26.771419   28131 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:02:26.771427   28131 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0130 20:02:26.771440   28131 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0130 20:02:26.771456   28131 command_runner.go:130] > This node has joined the cluster:
	I0130 20:02:26.771470   28131 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0130 20:02:26.771482   28131 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0130 20:02:26.771495   28131 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0130 20:02:26.771524   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
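Rejoining the worker is the kubeadm join command followed by reloading and (re)starting the kubelet, i.e. the two Run lines above. A hedged Go sketch of issuing them on the joining node (token and CA hash left as placeholders; root privileges and bash are assumed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The two commands issued on the joining node above.
	join := "kubeadm join control-plane.minikube.internal:8443" +
		" --token <token> --discovery-token-ca-cert-hash <hash>" +
		" --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock" +
		" --node-name=multinode-572652-m02"
	restart := "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	for _, c := range []string{join, restart} {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("command failed:", err)
			return
		}
	}
}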
	I0130 20:02:27.037870   28131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=multinode-572652 minikube.k8s.io/updated_at=2024_01_30T20_02_27_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:02:27.152653   28131 command_runner.go:130] > node/multinode-572652-m02 labeled
	I0130 20:02:27.166969   28131 command_runner.go:130] > node/multinode-572652-m03 labeled
	I0130 20:02:27.168683   28131 start.go:306] JoinCluster complete in 4.804932114s
	I0130 20:02:27.168709   28131 cni.go:84] Creating CNI manager for ""
	I0130 20:02:27.168715   28131 cni.go:136] 3 nodes found, recommending kindnet
	I0130 20:02:27.168768   28131 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0130 20:02:27.175162   28131 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0130 20:02:27.175186   28131 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0130 20:02:27.175196   28131 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0130 20:02:27.175207   28131 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 20:02:27.175219   28131 command_runner.go:130] > Access: 2024-01-30 19:59:52.571662116 +0000
	I0130 20:02:27.175227   28131 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0130 20:02:27.175242   28131 command_runner.go:130] > Change: 2024-01-30 19:59:50.660662116 +0000
	I0130 20:02:27.175256   28131 command_runner.go:130] >  Birth: -
	I0130 20:02:27.175317   28131 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0130 20:02:27.175331   28131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0130 20:02:27.200788   28131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0130 20:02:27.566176   28131 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0130 20:02:27.566200   28131 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0130 20:02:27.566209   28131 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0130 20:02:27.566217   28131 command_runner.go:130] > daemonset.apps/kindnet configured
	I0130 20:02:27.566533   28131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:02:27.566771   28131 kapi.go:59] client config for multinode-572652: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.crt", KeyFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.key", CAFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 20:02:27.567073   28131 round_trippers.go:463] GET https://192.168.39.186:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0130 20:02:27.567085   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.567100   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.567109   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.569198   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:02:27.569215   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.569224   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.569232   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.569240   28131 round_trippers.go:580]     Content-Length: 291
	I0130 20:02:27.569248   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.569258   28131 round_trippers.go:580]     Audit-Id: 1a732016-6e82-4f56-8894-cb2578099a42
	I0130 20:02:27.569271   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.569284   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.569311   28131 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2034a0c9-1da9-4b9e-a99f-a32637cca2aa","resourceVersion":"871","creationTimestamp":"2024-01-30T19:50:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0130 20:02:27.569390   28131 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-572652" context rescaled to 1 replicas
	I0130 20:02:27.569419   28131 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0130 20:02:27.571325   28131 out.go:177] * Verifying Kubernetes components...
	I0130 20:02:27.572633   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:02:27.587862   28131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:02:27.588161   28131 kapi.go:59] client config for multinode-572652: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.crt", KeyFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.key", CAFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 20:02:27.588456   28131 node_ready.go:35] waiting up to 6m0s for node "multinode-572652-m02" to be "Ready" ...
	I0130 20:02:27.588530   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m02
	I0130 20:02:27.588539   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.588552   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.588567   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.590909   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:02:27.590922   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.590928   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.590933   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.590949   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.590958   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.590971   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.590979   28131 round_trippers.go:580]     Audit-Id: 3bb537e3-8407-4f41-8f9f-73bec93059c0
	I0130 20:02:27.591305   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652-m02","uid":"0044ec35-b13c-4106-b118-c3ac58e05ff0","resourceVersion":"1022","creationTimestamp":"2024-01-30T20:02:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T20_02_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T20:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0130 20:02:27.591557   28131 node_ready.go:49] node "multinode-572652-m02" has status "Ready":"True"
	I0130 20:02:27.591574   28131 node_ready.go:38] duration metric: took 3.097985ms waiting for node "multinode-572652-m02" to be "Ready" ...
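The readiness wait above polls the node object until its Ready condition is "True", with a 6m budget. A simplified Go sketch of the same wait using kubectl's jsonpath output instead of the raw REST client (node name and timeout from the log; kubectl assumed on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the node's Ready condition, as the wait above does, but through
	// kubectl rather than a hand-rolled API client.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", "multinode-572652-m02",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println(`node "multinode-572652-m02" is Ready`)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the node to become Ready")
}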
	I0130 20:02:27.591585   28131 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:02:27.591641   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods
	I0130 20:02:27.591651   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.591662   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.591677   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.594839   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:02:27.594859   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.594868   28131 round_trippers.go:580]     Audit-Id: 62ee56a6-2d08-4e3d-bdc6-c4bca61f4125
	I0130 20:02:27.594877   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.594885   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.594893   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.594904   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.594916   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.595787   28131 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1029"},"items":[{"metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"850","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82238 chars]
	I0130 20:02:27.598246   28131 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-579fc" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:27.598306   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-579fc
	I0130 20:02:27.598314   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.598321   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.598327   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.600248   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:02:27.600269   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.600279   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.600287   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.600294   28131 round_trippers.go:580]     Audit-Id: 4069a687-26e5-4da5-979e-f09f220132f4
	I0130 20:02:27.600301   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.600309   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.600320   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.600481   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"850","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0130 20:02:27.600835   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:02:27.600846   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.600853   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.600860   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.602649   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:02:27.602667   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.602676   28131 round_trippers.go:580]     Audit-Id: 1b74039c-1795-45ca-bd98-86e634fbe29c
	I0130 20:02:27.602685   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.602693   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.602701   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.602713   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.602727   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.603031   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:02:27.603296   28131 pod_ready.go:92] pod "coredns-5dd5756b68-579fc" in "kube-system" namespace has status "Ready":"True"
	I0130 20:02:27.603311   28131 pod_ready.go:81] duration metric: took 5.044678ms waiting for pod "coredns-5dd5756b68-579fc" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:27.603322   28131 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:27.603363   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-572652
	I0130 20:02:27.603369   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.603377   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.603386   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.605247   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:02:27.605265   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.605273   28131 round_trippers.go:580]     Audit-Id: 210a3dd7-7e2f-4348-a168-449235734aad
	I0130 20:02:27.605281   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.605288   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.605296   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.605306   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.605318   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.605478   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-572652","namespace":"kube-system","uid":"e44ed93f-1c85-4d27-bacb-f454d6eaa0b6","resourceVersion":"857","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.186:2379","kubernetes.io/config.hash":"3d195cc1c68274636debff677374c054","kubernetes.io/config.mirror":"3d195cc1c68274636debff677374c054","kubernetes.io/config.seen":"2024-01-30T19:50:00.428284843Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0130 20:02:27.605760   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:02:27.605769   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.605776   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.605782   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.607453   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:02:27.607471   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.607480   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.607489   28131 round_trippers.go:580]     Audit-Id: b759cbc0-f0a5-4ee3-8fc6-55d00bd4a43f
	I0130 20:02:27.607497   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.607507   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.607515   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.607525   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.607664   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:02:27.607907   28131 pod_ready.go:92] pod "etcd-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:02:27.607919   28131 pod_ready.go:81] duration metric: took 4.588034ms waiting for pod "etcd-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:27.607933   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:27.607976   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-572652
	I0130 20:02:27.607983   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.607991   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.607996   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.609760   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:02:27.609779   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.609788   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.609796   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.609804   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.609812   28131 round_trippers.go:580]     Audit-Id: 7b245446-8d44-4c6a-a70e-528da5a863a4
	I0130 20:02:27.609820   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.609829   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.610100   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-572652","namespace":"kube-system","uid":"fc451607-277c-45fe-a0f9-a3502db0251b","resourceVersion":"863","creationTimestamp":"2024-01-30T19:49:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.186:8443","kubernetes.io/config.hash":"d6f18dcbbdea790709196864d2f77f8b","kubernetes.io/config.mirror":"d6f18dcbbdea790709196864d2f77f8b","kubernetes.io/config.seen":"2024-01-30T19:49:51.352745901Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:49:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0130 20:02:27.610535   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:02:27.610554   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.610565   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.610574   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.612459   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:02:27.612476   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.612485   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.612495   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.612503   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.612508   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.612514   28131 round_trippers.go:580]     Audit-Id: 4b9cca7a-3f96-4ed4-a204-0c8a77fc0157
	I0130 20:02:27.612519   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.612674   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:02:27.613063   28131 pod_ready.go:92] pod "kube-apiserver-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:02:27.613084   28131 pod_ready.go:81] duration metric: took 5.145111ms waiting for pod "kube-apiserver-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:27.613096   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:27.613152   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-572652
	I0130 20:02:27.613163   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.613174   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.613184   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.615292   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:02:27.615309   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.615315   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.615320   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.615325   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.615331   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.615336   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.615341   28131 round_trippers.go:580]     Audit-Id: 30bc33ed-a04e-4660-ae0c-85c3ccc14d58
	I0130 20:02:27.615582   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-572652","namespace":"kube-system","uid":"ce85a6a9-3600-41a9-824a-d01c009aead2","resourceVersion":"877","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.mirror":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289181Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0130 20:02:27.615993   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:02:27.616007   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.616018   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.616028   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.618123   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:02:27.618141   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.618151   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.618160   28131 round_trippers.go:580]     Audit-Id: d3b613ad-dd1e-4bad-83aa-f2d06a5def6c
	I0130 20:02:27.618168   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.618176   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.618184   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.618195   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.618392   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:02:27.618745   28131 pod_ready.go:92] pod "kube-controller-manager-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:02:27.618760   28131 pod_ready.go:81] duration metric: took 5.655951ms waiting for pod "kube-controller-manager-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:27.618770   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hx9f7" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:27.789189   28131 request.go:629] Waited for 170.337021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx9f7
	I0130 20:02:27.789260   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx9f7
	I0130 20:02:27.789271   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.789281   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.789292   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.792562   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:02:27.792586   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.792597   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.792605   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.792613   28131 round_trippers.go:580]     Audit-Id: f4e77df9-5b02-4669-94d1-e098cbc7309b
	I0130 20:02:27.792628   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.792637   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.792645   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.793124   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hx9f7","generateName":"kube-proxy-","namespace":"kube-system","uid":"95d8777b-0e61-4662-a7a6-1fb5e7b4ae29","resourceVersion":"773","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0130 20:02:27.989056   28131 request.go:629] Waited for 195.413565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:02:27.989148   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:02:27.989158   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:27.989169   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:27.989182   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:27.991776   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:02:27.991797   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:27.991803   28131 round_trippers.go:580]     Audit-Id: bff7383b-a041-4239-b1a2-2ddc35fb8b76
	I0130 20:02:27.991809   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:27.991814   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:27.991819   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:27.991824   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:27.991841   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:27 GMT
	I0130 20:02:27.992263   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:02:27.992677   28131 pod_ready.go:92] pod "kube-proxy-hx9f7" in "kube-system" namespace has status "Ready":"True"
	I0130 20:02:27.992699   28131 pod_ready.go:81] duration metric: took 373.921625ms waiting for pod "kube-proxy-hx9f7" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:27.992711   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j5sr4" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:28.188611   28131 request.go:629] Waited for 195.837354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5sr4
	I0130 20:02:28.188674   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5sr4
	I0130 20:02:28.188680   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:28.188709   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:28.188728   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:28.192587   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:02:28.192608   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:28.192617   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:28.192624   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:28.192632   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:28.192640   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:28.192648   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:28 GMT
	I0130 20:02:28.192657   28131 round_trippers.go:580]     Audit-Id: d9c11e99-e186-4c2c-b7b7-e6c07a6d5ef7
	I0130 20:02:28.192899   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5sr4","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6bacfbc-c1e8-4dd2-bd48-778725887a72","resourceVersion":"699","creationTimestamp":"2024-01-30T19:51:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0130 20:02:28.388735   28131 request.go:629] Waited for 195.286628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m03
	I0130 20:02:28.388803   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m03
	I0130 20:02:28.388809   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:28.388820   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:28.388832   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:28.391614   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:02:28.391647   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:28.391658   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:28.391665   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:28 GMT
	I0130 20:02:28.391673   28131 round_trippers.go:580]     Audit-Id: ddc27103-e3ee-4cf7-b582-928d63f9f346
	I0130 20:02:28.391681   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:28.391690   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:28.391702   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:28.392108   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652-m03","uid":"6e43dfc4-d01d-44de-b61c-e668bf1447ff","resourceVersion":"1034","creationTimestamp":"2024-01-30T19:52:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T20_02_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:52:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4236 chars]
	I0130 20:02:28.392363   28131 pod_ready.go:92] pod "kube-proxy-j5sr4" in "kube-system" namespace has status "Ready":"True"
	I0130 20:02:28.392378   28131 pod_ready.go:81] duration metric: took 399.659227ms waiting for pod "kube-proxy-j5sr4" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:28.392388   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rbwvp" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:28.589557   28131 request.go:629] Waited for 197.0962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rbwvp
	I0130 20:02:28.589623   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rbwvp
	I0130 20:02:28.589631   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:28.589641   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:28.589654   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:28.592188   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:02:28.592214   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:28.592222   28131 round_trippers.go:580]     Audit-Id: bf7de5fa-974a-4cb2-8ad9-b99ba585c978
	I0130 20:02:28.592231   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:28.592236   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:28.592242   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:28.592247   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:28.592255   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:28 GMT
	I0130 20:02:28.592394   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rbwvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"2cd3c663-bf55-49b2-9120-101ac59912fd","resourceVersion":"1042","creationTimestamp":"2024-01-30T19:50:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0130 20:02:28.789158   28131 request.go:629] Waited for 196.358251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m02
	I0130 20:02:28.789242   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m02
	I0130 20:02:28.789255   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:28.789266   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:28.789277   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:28.792008   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:02:28.792034   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:28.792045   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:28 GMT
	I0130 20:02:28.792053   28131 round_trippers.go:580]     Audit-Id: f113e708-dd6c-4cfe-8569-a3d76ff96e16
	I0130 20:02:28.792061   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:28.792069   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:28.792079   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:28.792088   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:28.792274   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652-m02","uid":"0044ec35-b13c-4106-b118-c3ac58e05ff0","resourceVersion":"1022","creationTimestamp":"2024-01-30T20:02:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T20_02_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T20:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0130 20:02:28.792577   28131 pod_ready.go:92] pod "kube-proxy-rbwvp" in "kube-system" namespace has status "Ready":"True"
	I0130 20:02:28.792595   28131 pod_ready.go:81] duration metric: took 400.198944ms waiting for pod "kube-proxy-rbwvp" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:28.792605   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:28.989573   28131 request.go:629] Waited for 196.901131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-572652
	I0130 20:02:28.989654   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-572652
	I0130 20:02:28.989665   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:28.989678   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:28.989689   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:28.992735   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:02:28.992751   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:28.992760   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:28.992768   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:28.992775   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:28 GMT
	I0130 20:02:28.992783   28131 round_trippers.go:580]     Audit-Id: 497390cc-9708-4e50-b60b-fbe61da18239
	I0130 20:02:28.992798   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:28.992814   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:28.993277   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-572652","namespace":"kube-system","uid":"ee4d8608-40cb-4281-ac1f-bc5ac41ff27d","resourceVersion":"855","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"85e85fa7283981ab3a029cbc7c4cbcc1","kubernetes.io/config.mirror":"85e85fa7283981ab3a029cbc7c4cbcc1","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289879Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0130 20:02:29.189034   28131 request.go:629] Waited for 195.368576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:02:29.189108   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:02:29.189114   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:29.189122   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:29.189131   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:29.191782   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:02:29.191801   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:29.191811   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:29.191818   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:29.191823   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:29.191829   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:29 GMT
	I0130 20:02:29.191837   28131 round_trippers.go:580]     Audit-Id: d4b8e290-ee79-4d9f-9309-481520aab260
	I0130 20:02:29.191842   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:29.192082   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:02:29.192483   28131 pod_ready.go:92] pod "kube-scheduler-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:02:29.192509   28131 pod_ready.go:81] duration metric: took 399.883412ms waiting for pod "kube-scheduler-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:02:29.192523   28131 pod_ready.go:38] duration metric: took 1.600922793s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
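[Editorial aside] The pod_ready lines above record minikube polling each system pod's Ready condition through the API server until it flips to "True" or the per-pod timeout expires. As a rough illustration only, not minikube's actual implementation, the following client-go sketch shows the same polling pattern; the package name, the waitPodReady helper, and the passed-in clientset are assumptions made for this example.

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the named pod until its Ready condition is True or the
// timeout expires, mirroring the "waiting up to 6m0s for pod ..." log lines.
func waitPodReady(ctx context.Context, client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s was not Ready within %s", ns, name, timeout)
}

Called once per pod name taken from the log (coredns-5dd5756b68-579fc, etcd-multinode-572652, and so on), this reproduces the sequence of GET requests and "Ready":"True" checks shown above.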
	I0130 20:02:29.192541   28131 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:02:29.192596   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:02:29.205743   28131 system_svc.go:56] duration metric: took 13.194143ms WaitForService to wait for kubelet.
	I0130 20:02:29.205765   28131 kubeadm.go:581] duration metric: took 1.63631952s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:02:29.205785   28131 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:02:29.389197   28131 request.go:629] Waited for 183.332141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes
	I0130 20:02:29.389254   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes
	I0130 20:02:29.389262   28131 round_trippers.go:469] Request Headers:
	I0130 20:02:29.389274   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:02:29.389287   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:02:29.393153   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:02:29.393190   28131 round_trippers.go:577] Response Headers:
	I0130 20:02:29.393200   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:02:29 GMT
	I0130 20:02:29.393209   28131 round_trippers.go:580]     Audit-Id: 48d37f51-f1a3-4dee-8cdb-3f4a2a8f8004
	I0130 20:02:29.393218   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:02:29.393230   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:02:29.393240   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:02:29.393252   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:02:29.393501   28131 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1047"},"items":[{"metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16481 chars]
	I0130 20:02:29.394141   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:02:29.394161   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:02:29.394170   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:02:29.394174   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:02:29.394178   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:02:29.394182   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:02:29.394186   28131 node_conditions.go:105] duration metric: took 188.396278ms to run NodePressure ...
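[Editorial aside] The node_conditions lines above come from listing the nodes once and reading each node's reported capacity. A minimal sketch of that read, assuming an existing clientset and not taken from minikube's sources:

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and prints the two capacity fields the
// log reports for each of the three nodes: CPU and ephemeral storage.
func printNodeCapacity(ctx context.Context, client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}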
	I0130 20:02:29.394197   28131 start.go:228] waiting for startup goroutines ...
	I0130 20:02:29.394215   28131 start.go:242] writing updated cluster config ...
	I0130 20:02:29.394607   28131 config.go:182] Loaded profile config "multinode-572652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:02:29.394687   28131 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/config.json ...
	I0130 20:02:29.397569   28131 out.go:177] * Starting worker node multinode-572652-m03 in cluster multinode-572652
	I0130 20:02:29.398781   28131 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:02:29.398798   28131 cache.go:56] Caching tarball of preloaded images
	I0130 20:02:29.398883   28131 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 20:02:29.398906   28131 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 20:02:29.399025   28131 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/config.json ...
	I0130 20:02:29.399231   28131 start.go:365] acquiring machines lock for multinode-572652-m03: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:02:29.399297   28131 start.go:369] acquired machines lock for "multinode-572652-m03" in 41.162µs
	I0130 20:02:29.399310   28131 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:02:29.399317   28131 fix.go:54] fixHost starting: m03
	I0130 20:02:29.399561   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:02:29.399582   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:02:29.413624   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0130 20:02:29.414016   28131 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:02:29.414405   28131 main.go:141] libmachine: Using API Version  1
	I0130 20:02:29.414422   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:02:29.414688   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:02:29.414849   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .DriverName
	I0130 20:02:29.414969   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetState
	I0130 20:02:29.416510   28131 fix.go:102] recreateIfNeeded on multinode-572652-m03: state=Running err=<nil>
	W0130 20:02:29.416528   28131 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:02:29.418269   28131 out.go:177] * Updating the running kvm2 "multinode-572652-m03" VM ...
	I0130 20:02:29.419487   28131 machine.go:88] provisioning docker machine ...
	I0130 20:02:29.419503   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .DriverName
	I0130 20:02:29.419684   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetMachineName
	I0130 20:02:29.419830   28131 buildroot.go:166] provisioning hostname "multinode-572652-m03"
	I0130 20:02:29.419851   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetMachineName
	I0130 20:02:29.420007   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHHostname
	I0130 20:02:29.422247   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:29.422637   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:02:29.422661   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:29.422798   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHPort
	I0130 20:02:29.422966   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:02:29.423114   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:02:29.423247   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHUsername
	I0130 20:02:29.423430   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 20:02:29.423780   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0130 20:02:29.423795   28131 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-572652-m03 && echo "multinode-572652-m03" | sudo tee /etc/hostname
	I0130 20:02:29.570846   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-572652-m03
	
	I0130 20:02:29.570874   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHHostname
	I0130 20:02:29.573885   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:29.574191   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:02:29.574214   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:29.574374   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHPort
	I0130 20:02:29.574548   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:02:29.574717   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:02:29.574866   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHUsername
	I0130 20:02:29.574993   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 20:02:29.575310   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0130 20:02:29.575327   28131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-572652-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-572652-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-572652-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:02:29.707908   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:02:29.707948   28131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:02:29.707969   28131 buildroot.go:174] setting up certificates
	I0130 20:02:29.707980   28131 provision.go:83] configureAuth start
	I0130 20:02:29.707994   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetMachineName
	I0130 20:02:29.708247   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetIP
	I0130 20:02:29.710939   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:29.711296   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:02:29.711326   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:29.711449   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHHostname
	I0130 20:02:29.713624   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:29.713950   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:02:29.713991   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:29.714068   28131 provision.go:138] copyHostCerts
	I0130 20:02:29.714097   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:02:29.714133   28131 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:02:29.714144   28131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:02:29.714211   28131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:02:29.714304   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:02:29.714329   28131 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:02:29.714339   28131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:02:29.714396   28131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:02:29.714467   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:02:29.714490   28131 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:02:29.714499   28131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:02:29.714528   28131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:02:29.714590   28131 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.multinode-572652-m03 san=[192.168.39.58 192.168.39.58 localhost 127.0.0.1 minikube multinode-572652-m03]
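[Editorial aside] The provision.go line above records a server certificate being generated for the node with both IP and DNS subject alternative names. Purely as an illustrative sketch with assumed helper names, not minikube's code, issuing such a certificate from an already-loaded CA with the Go standard library looks roughly like this:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// newServerCertPEM signs a server certificate for the given SANs with the
// supplied CA and returns PEM-encoded certificate and key. The caller is
// assumed to have loaded caCert/caKey already (e.g. from ca.pem / ca-key.pem).
func newServerCertPEM(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, dnsSANs []string, ipSANs []net.IP) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsSANs,
		IPAddresses:  ipSANs,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

With the SAN list from the log line above this would be invoked with dnsSANs of localhost, minikube and multinode-572652-m03, and ipSANs of 192.168.39.58 and 127.0.0.1.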
	I0130 20:02:30.045386   28131 provision.go:172] copyRemoteCerts
	I0130 20:02:30.045437   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:02:30.045458   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHHostname
	I0130 20:02:30.048248   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:30.048618   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:02:30.048642   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:30.048820   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHPort
	I0130 20:02:30.049005   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:02:30.049146   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHUsername
	I0130 20:02:30.049303   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652-m03/id_rsa Username:docker}
	I0130 20:02:30.145777   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0130 20:02:30.145856   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:02:30.167587   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0130 20:02:30.167660   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0130 20:02:30.188761   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0130 20:02:30.188830   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:02:30.210756   28131 provision.go:86] duration metric: configureAuth took 502.760575ms
	I0130 20:02:30.210789   28131 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:02:30.211007   28131 config.go:182] Loaded profile config "multinode-572652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:02:30.211072   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHHostname
	I0130 20:02:30.213524   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:30.213835   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:02:30.213867   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:02:30.213997   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHPort
	I0130 20:02:30.214190   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:02:30.214334   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:02:30.214487   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHUsername
	I0130 20:02:30.214693   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 20:02:30.214988   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0130 20:02:30.215003   28131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:04:00.691054   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:04:00.691089   28131 machine.go:91] provisioned docker machine in 1m31.271590406s
	I0130 20:04:00.691103   28131 start.go:300] post-start starting for "multinode-572652-m03" (driver="kvm2")
	I0130 20:04:00.691116   28131 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:04:00.691139   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .DriverName
	I0130 20:04:00.691498   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:04:00.691537   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHHostname
	I0130 20:04:00.694648   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:00.695045   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:04:00.695079   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:00.695291   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHPort
	I0130 20:04:00.695493   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:04:00.695647   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHUsername
	I0130 20:04:00.695809   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652-m03/id_rsa Username:docker}
	I0130 20:04:00.794195   28131 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:04:00.798811   28131 command_runner.go:130] > NAME=Buildroot
	I0130 20:04:00.798835   28131 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0130 20:04:00.798843   28131 command_runner.go:130] > ID=buildroot
	I0130 20:04:00.798849   28131 command_runner.go:130] > VERSION_ID=2021.02.12
	I0130 20:04:00.798855   28131 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0130 20:04:00.799002   28131 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:04:00.799021   28131 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:04:00.799081   28131 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:04:00.799163   28131 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:04:00.799174   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> /etc/ssl/certs/116672.pem
	I0130 20:04:00.799303   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:04:00.807054   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:04:00.829028   28131 start.go:303] post-start completed in 137.914967ms
	I0130 20:04:00.829045   28131 fix.go:56] fixHost completed within 1m31.42972744s
	I0130 20:04:00.829063   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHHostname
	I0130 20:04:00.831638   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:00.832078   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:04:00.832105   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:00.832293   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHPort
	I0130 20:04:00.832471   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:04:00.832653   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:04:00.832797   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHUsername
	I0130 20:04:00.832969   28131 main.go:141] libmachine: Using SSH client type: native
	I0130 20:04:00.833258   28131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0130 20:04:00.833269   28131 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:04:00.963821   28131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706645040.958419001
	
	I0130 20:04:00.963849   28131 fix.go:206] guest clock: 1706645040.958419001
	I0130 20:04:00.963857   28131 fix.go:219] Guest: 2024-01-30 20:04:00.958419001 +0000 UTC Remote: 2024-01-30 20:04:00.829049626 +0000 UTC m=+559.424757768 (delta=129.369375ms)
	I0130 20:04:00.963873   28131 fix.go:190] guest clock delta is within tolerance: 129.369375ms
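(The delta is just the difference of the two clocks printed above: 20:04:00.958419001 - 20:04:00.829049626 = 0.129369375s ≈ 129.37ms, inside the allowed skew, so no guest clock adjustment is needed.)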
	I0130 20:04:00.963877   28131 start.go:83] releasing machines lock for "multinode-572652-m03", held for 1m31.564572791s
	I0130 20:04:00.963895   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .DriverName
	I0130 20:04:00.964132   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetIP
	I0130 20:04:00.966913   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:00.967255   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:04:00.967301   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:00.969379   28131 out.go:177] * Found network options:
	I0130 20:04:00.971011   28131 out.go:177]   - NO_PROXY=192.168.39.186,192.168.39.137
	W0130 20:04:00.972409   28131 proxy.go:119] fail to check proxy env: Error ip not in block
	W0130 20:04:00.972435   28131 proxy.go:119] fail to check proxy env: Error ip not in block
	I0130 20:04:00.972447   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .DriverName
	I0130 20:04:00.973024   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .DriverName
	I0130 20:04:00.973213   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .DriverName
	I0130 20:04:00.973299   28131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:04:00.973331   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHHostname
	W0130 20:04:00.973416   28131 proxy.go:119] fail to check proxy env: Error ip not in block
	W0130 20:04:00.973441   28131 proxy.go:119] fail to check proxy env: Error ip not in block
	I0130 20:04:00.973506   28131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:04:00.973529   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHHostname
	I0130 20:04:00.976014   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:00.976365   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:00.976408   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:04:00.976443   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:00.976566   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHPort
	I0130 20:04:00.976733   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:04:00.976823   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:04:00.976858   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:00.976888   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHUsername
	I0130 20:04:00.977040   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHPort
	I0130 20:04:00.977055   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652-m03/id_rsa Username:docker}
	I0130 20:04:00.977222   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHKeyPath
	I0130 20:04:00.977366   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetSSHUsername
	I0130 20:04:00.977546   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652-m03/id_rsa Username:docker}
	I0130 20:04:01.212711   28131 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0130 20:04:01.212749   28131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0130 20:04:01.218501   28131 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0130 20:04:01.218756   28131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:04:01.218820   28131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:04:01.226842   28131 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0130 20:04:01.226863   28131 start.go:475] detecting cgroup driver to use...
	I0130 20:04:01.226920   28131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:04:01.240379   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:04:01.252617   28131 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:04:01.252669   28131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:04:01.266276   28131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:04:01.278553   28131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:04:01.414151   28131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:04:01.543708   28131 docker.go:233] disabling docker service ...
	I0130 20:04:01.543785   28131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:04:01.557318   28131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:04:01.569780   28131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:04:01.808098   28131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:04:01.956996   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:04:01.971278   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:04:01.991815   28131 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0130 20:04:01.991857   28131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:04:01.991898   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:04:02.004261   28131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:04:02.004315   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:04:02.014490   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:04:02.024144   28131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:04:02.033584   28131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:04:02.042802   28131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:04:02.050961   28131 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0130 20:04:02.051156   28131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:04:02.059513   28131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:04:02.213755   28131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:04:04.863139   28131 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.649343366s)
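For reference, the sed edits above leave the relevant keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (a sketch reconstructed from the sed commands; cgroup_manager and conmon_cgroup also show up with these values in the `crio config` dump further down, and the rest of the drop-in is untouched):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"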
	I0130 20:04:04.863164   28131 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:04:04.863206   28131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:04:04.873269   28131 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0130 20:04:04.873291   28131 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0130 20:04:04.873299   28131 command_runner.go:130] > Device: 16h/22d	Inode: 1195        Links: 1
	I0130 20:04:04.873306   28131 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 20:04:04.873311   28131 command_runner.go:130] > Access: 2024-01-30 20:04:04.758999798 +0000
	I0130 20:04:04.873317   28131 command_runner.go:130] > Modify: 2024-01-30 20:04:04.758999798 +0000
	I0130 20:04:04.873323   28131 command_runner.go:130] > Change: 2024-01-30 20:04:04.758999798 +0000
	I0130 20:04:04.873329   28131 command_runner.go:130] >  Birth: -
	I0130 20:04:04.873424   28131 start.go:543] Will wait 60s for crictl version
	I0130 20:04:04.873467   28131 ssh_runner.go:195] Run: which crictl
	I0130 20:04:04.877243   28131 command_runner.go:130] > /usr/bin/crictl
	I0130 20:04:04.877536   28131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:04:04.914549   28131 command_runner.go:130] > Version:  0.1.0
	I0130 20:04:04.914577   28131 command_runner.go:130] > RuntimeName:  cri-o
	I0130 20:04:04.914585   28131 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0130 20:04:04.914594   28131 command_runner.go:130] > RuntimeApiVersion:  v1
	I0130 20:04:04.914613   28131 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:04:04.914679   28131 ssh_runner.go:195] Run: crio --version
	I0130 20:04:04.966248   28131 command_runner.go:130] > crio version 1.24.1
	I0130 20:04:04.966268   28131 command_runner.go:130] > Version:          1.24.1
	I0130 20:04:04.966276   28131 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 20:04:04.966283   28131 command_runner.go:130] > GitTreeState:     dirty
	I0130 20:04:04.966293   28131 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 20:04:04.966301   28131 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 20:04:04.966308   28131 command_runner.go:130] > Compiler:         gc
	I0130 20:04:04.966314   28131 command_runner.go:130] > Platform:         linux/amd64
	I0130 20:04:04.966320   28131 command_runner.go:130] > Linkmode:         dynamic
	I0130 20:04:04.966327   28131 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 20:04:04.966331   28131 command_runner.go:130] > SeccompEnabled:   true
	I0130 20:04:04.966335   28131 command_runner.go:130] > AppArmorEnabled:  false
	I0130 20:04:04.966512   28131 ssh_runner.go:195] Run: crio --version
	I0130 20:04:05.017897   28131 command_runner.go:130] > crio version 1.24.1
	I0130 20:04:05.017921   28131 command_runner.go:130] > Version:          1.24.1
	I0130 20:04:05.017942   28131 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 20:04:05.017950   28131 command_runner.go:130] > GitTreeState:     dirty
	I0130 20:04:05.017958   28131 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 20:04:05.017963   28131 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 20:04:05.017967   28131 command_runner.go:130] > Compiler:         gc
	I0130 20:04:05.017972   28131 command_runner.go:130] > Platform:         linux/amd64
	I0130 20:04:05.017980   28131 command_runner.go:130] > Linkmode:         dynamic
	I0130 20:04:05.017990   28131 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 20:04:05.017995   28131 command_runner.go:130] > SeccompEnabled:   true
	I0130 20:04:05.018002   28131 command_runner.go:130] > AppArmorEnabled:  false
	I0130 20:04:05.019962   28131 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 20:04:05.021355   28131 out.go:177]   - env NO_PROXY=192.168.39.186
	I0130 20:04:05.022738   28131 out.go:177]   - env NO_PROXY=192.168.39.186,192.168.39.137
	I0130 20:04:05.023937   28131 main.go:141] libmachine: (multinode-572652-m03) Calling .GetIP
	I0130 20:04:05.026717   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:05.027111   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:a6:6c", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:51:28 +0000 UTC Type:0 Mac:52:54:00:26:a6:6c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-572652-m03 Clientid:01:52:54:00:26:a6:6c}
	I0130 20:04:05.027142   28131 main.go:141] libmachine: (multinode-572652-m03) DBG | domain multinode-572652-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:26:a6:6c in network mk-multinode-572652
	I0130 20:04:05.027344   28131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 20:04:05.032282   28131 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0130 20:04:05.032549   28131 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652 for IP: 192.168.39.58
	I0130 20:04:05.032570   28131 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:04:05.032723   28131 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:04:05.032777   28131 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:04:05.032792   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0130 20:04:05.032811   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0130 20:04:05.032826   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0130 20:04:05.032844   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0130 20:04:05.032912   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:04:05.032957   28131 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:04:05.032973   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:04:05.033010   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:04:05.033042   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:04:05.033075   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:04:05.033132   28131 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:04:05.033165   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> /usr/share/ca-certificates/116672.pem
	I0130 20:04:05.033181   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:04:05.033199   28131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem -> /usr/share/ca-certificates/11667.pem
	I0130 20:04:05.033551   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:04:05.058308   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:04:05.082012   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:04:05.105629   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:04:05.131207   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:04:05.155031   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:04:05.180408   28131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:04:05.205496   28131 ssh_runner.go:195] Run: openssl version
	I0130 20:04:05.211500   28131 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0130 20:04:05.211834   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:04:05.222190   28131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:04:05.226860   28131 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:04:05.227125   28131 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:04:05.227185   28131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:04:05.233239   28131 command_runner.go:130] > 3ec20f2e
	I0130 20:04:05.233523   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:04:05.241840   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:04:05.251457   28131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:04:05.256085   28131 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:04:05.256210   28131 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:04:05.256263   28131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:04:05.261315   28131 command_runner.go:130] > b5213941
	I0130 20:04:05.261639   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:04:05.271730   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:04:05.281832   28131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:04:05.286591   28131 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:04:05.286788   28131 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:04:05.286821   28131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:04:05.293474   28131 command_runner.go:130] > 51391683
	I0130 20:04:05.293544   28131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
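The three test/ln sequences above implement OpenSSL's subject-hash lookup convention: each CA certificate is exposed in /etc/ssl/certs under a name of the form <subject-hash>.0 so verification can look it up by hash. Condensed into a shell sketch of what each pass does (paths and the example hash are taken from this log):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem)   # 3ec20f2e in this run
	sudo ln -fs /etc/ssl/certs/116672.pem "/etc/ssl/certs/${hash}.0"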
	I0130 20:04:05.302286   28131 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:04:05.306525   28131 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 20:04:05.306918   28131 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 20:04:05.307005   28131 ssh_runner.go:195] Run: crio config
	I0130 20:04:05.364338   28131 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0130 20:04:05.364370   28131 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0130 20:04:05.364381   28131 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0130 20:04:05.364387   28131 command_runner.go:130] > #
	I0130 20:04:05.364398   28131 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0130 20:04:05.364408   28131 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0130 20:04:05.364418   28131 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0130 20:04:05.364429   28131 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0130 20:04:05.364439   28131 command_runner.go:130] > # reload'.
	I0130 20:04:05.364449   28131 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0130 20:04:05.364462   28131 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0130 20:04:05.364476   28131 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0130 20:04:05.364489   28131 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0130 20:04:05.364503   28131 command_runner.go:130] > [crio]
	I0130 20:04:05.364515   28131 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0130 20:04:05.364526   28131 command_runner.go:130] > # containers images, in this directory.
	I0130 20:04:05.364539   28131 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0130 20:04:05.364556   28131 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0130 20:04:05.364803   28131 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0130 20:04:05.364823   28131 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0130 20:04:05.364832   28131 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0130 20:04:05.364944   28131 command_runner.go:130] > storage_driver = "overlay"
	I0130 20:04:05.364966   28131 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0130 20:04:05.364976   28131 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0130 20:04:05.364989   28131 command_runner.go:130] > storage_option = [
	I0130 20:04:05.365162   28131 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0130 20:04:05.365193   28131 command_runner.go:130] > ]
	I0130 20:04:05.365204   28131 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0130 20:04:05.365211   28131 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0130 20:04:05.365546   28131 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0130 20:04:05.365558   28131 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0130 20:04:05.365564   28131 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0130 20:04:05.365569   28131 command_runner.go:130] > # always happen on a node reboot
	I0130 20:04:05.366018   28131 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0130 20:04:05.366035   28131 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0130 20:04:05.366048   28131 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0130 20:04:05.366064   28131 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0130 20:04:05.366908   28131 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0130 20:04:05.366925   28131 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0130 20:04:05.366933   28131 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0130 20:04:05.366938   28131 command_runner.go:130] > # internal_wipe = true
	I0130 20:04:05.366944   28131 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0130 20:04:05.366953   28131 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0130 20:04:05.366959   28131 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0130 20:04:05.366967   28131 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0130 20:04:05.366975   28131 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0130 20:04:05.366980   28131 command_runner.go:130] > [crio.api]
	I0130 20:04:05.366986   28131 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0130 20:04:05.366993   28131 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0130 20:04:05.367001   28131 command_runner.go:130] > # IP address on which the stream server will listen.
	I0130 20:04:05.367005   28131 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0130 20:04:05.367014   28131 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0130 20:04:05.367021   28131 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0130 20:04:05.367027   28131 command_runner.go:130] > # stream_port = "0"
	I0130 20:04:05.367037   28131 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0130 20:04:05.367044   28131 command_runner.go:130] > # stream_enable_tls = false
	I0130 20:04:05.367056   28131 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0130 20:04:05.367067   28131 command_runner.go:130] > # stream_idle_timeout = ""
	I0130 20:04:05.367077   28131 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0130 20:04:05.367088   28131 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0130 20:04:05.367094   28131 command_runner.go:130] > # minutes.
	I0130 20:04:05.367098   28131 command_runner.go:130] > # stream_tls_cert = ""
	I0130 20:04:05.367107   28131 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0130 20:04:05.367115   28131 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0130 20:04:05.367126   28131 command_runner.go:130] > # stream_tls_key = ""
	I0130 20:04:05.367135   28131 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0130 20:04:05.367145   28131 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0130 20:04:05.367157   28131 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0130 20:04:05.367168   28131 command_runner.go:130] > # stream_tls_ca = ""
	I0130 20:04:05.367179   28131 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 20:04:05.367186   28131 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0130 20:04:05.367193   28131 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 20:04:05.367200   28131 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0130 20:04:05.367212   28131 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0130 20:04:05.367220   28131 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0130 20:04:05.367224   28131 command_runner.go:130] > [crio.runtime]
	I0130 20:04:05.367235   28131 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0130 20:04:05.367248   28131 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0130 20:04:05.367258   28131 command_runner.go:130] > # "nofile=1024:2048"
	I0130 20:04:05.367288   28131 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0130 20:04:05.367299   28131 command_runner.go:130] > # default_ulimits = [
	I0130 20:04:05.367305   28131 command_runner.go:130] > # ]
	I0130 20:04:05.367320   28131 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0130 20:04:05.367331   28131 command_runner.go:130] > # no_pivot = false
	I0130 20:04:05.367341   28131 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0130 20:04:05.367354   28131 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0130 20:04:05.367367   28131 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0130 20:04:05.367382   28131 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0130 20:04:05.367392   28131 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0130 20:04:05.367402   28131 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 20:04:05.367407   28131 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0130 20:04:05.367411   28131 command_runner.go:130] > # Cgroup setting for conmon
	I0130 20:04:05.367420   28131 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0130 20:04:05.367425   28131 command_runner.go:130] > conmon_cgroup = "pod"
	I0130 20:04:05.367433   28131 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0130 20:04:05.367442   28131 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0130 20:04:05.367456   28131 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 20:04:05.367467   28131 command_runner.go:130] > conmon_env = [
	I0130 20:04:05.367482   28131 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0130 20:04:05.367490   28131 command_runner.go:130] > ]
	I0130 20:04:05.367500   28131 command_runner.go:130] > # Additional environment variables to set for all the
	I0130 20:04:05.367509   28131 command_runner.go:130] > # containers. These are overridden if set in the
	I0130 20:04:05.367517   28131 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0130 20:04:05.367527   28131 command_runner.go:130] > # default_env = [
	I0130 20:04:05.367533   28131 command_runner.go:130] > # ]
	I0130 20:04:05.367546   28131 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0130 20:04:05.367555   28131 command_runner.go:130] > # selinux = false
	I0130 20:04:05.367566   28131 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0130 20:04:05.367581   28131 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0130 20:04:05.367592   28131 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0130 20:04:05.367599   28131 command_runner.go:130] > # seccomp_profile = ""
	I0130 20:04:05.367604   28131 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0130 20:04:05.367612   28131 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0130 20:04:05.367620   28131 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0130 20:04:05.367627   28131 command_runner.go:130] > # which might increase security.
	I0130 20:04:05.367631   28131 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0130 20:04:05.367640   28131 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0130 20:04:05.367650   28131 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0130 20:04:05.367658   28131 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0130 20:04:05.367665   28131 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0130 20:04:05.367672   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:04:05.367677   28131 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0130 20:04:05.367682   28131 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0130 20:04:05.367689   28131 command_runner.go:130] > # the cgroup blockio controller.
	I0130 20:04:05.367693   28131 command_runner.go:130] > # blockio_config_file = ""
	I0130 20:04:05.367702   28131 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0130 20:04:05.367707   28131 command_runner.go:130] > # irqbalance daemon.
	I0130 20:04:05.367714   28131 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0130 20:04:05.367721   28131 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0130 20:04:05.367728   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:04:05.367732   28131 command_runner.go:130] > # rdt_config_file = ""
	I0130 20:04:05.367740   28131 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0130 20:04:05.367744   28131 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0130 20:04:05.367751   28131 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0130 20:04:05.367757   28131 command_runner.go:130] > # separate_pull_cgroup = ""
	I0130 20:04:05.367763   28131 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0130 20:04:05.367772   28131 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0130 20:04:05.367777   28131 command_runner.go:130] > # will be added.
	I0130 20:04:05.367781   28131 command_runner.go:130] > # default_capabilities = [
	I0130 20:04:05.367787   28131 command_runner.go:130] > # 	"CHOWN",
	I0130 20:04:05.367791   28131 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0130 20:04:05.367797   28131 command_runner.go:130] > # 	"FSETID",
	I0130 20:04:05.367801   28131 command_runner.go:130] > # 	"FOWNER",
	I0130 20:04:05.367805   28131 command_runner.go:130] > # 	"SETGID",
	I0130 20:04:05.367811   28131 command_runner.go:130] > # 	"SETUID",
	I0130 20:04:05.367815   28131 command_runner.go:130] > # 	"SETPCAP",
	I0130 20:04:05.367819   28131 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0130 20:04:05.367823   28131 command_runner.go:130] > # 	"KILL",
	I0130 20:04:05.367830   28131 command_runner.go:130] > # ]
	I0130 20:04:05.367840   28131 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0130 20:04:05.367852   28131 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 20:04:05.367862   28131 command_runner.go:130] > # default_sysctls = [
	I0130 20:04:05.367867   28131 command_runner.go:130] > # ]
	I0130 20:04:05.367879   28131 command_runner.go:130] > # List of devices on the host that a
	I0130 20:04:05.367892   28131 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0130 20:04:05.367910   28131 command_runner.go:130] > # allowed_devices = [
	I0130 20:04:05.367919   28131 command_runner.go:130] > # 	"/dev/fuse",
	I0130 20:04:05.367930   28131 command_runner.go:130] > # ]
	I0130 20:04:05.367941   28131 command_runner.go:130] > # List of additional devices. specified as
	I0130 20:04:05.367957   28131 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0130 20:04:05.367969   28131 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0130 20:04:05.367993   28131 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 20:04:05.368004   28131 command_runner.go:130] > # additional_devices = [
	I0130 20:04:05.368010   28131 command_runner.go:130] > # ]
	I0130 20:04:05.368016   28131 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0130 20:04:05.368023   28131 command_runner.go:130] > # cdi_spec_dirs = [
	I0130 20:04:05.368028   28131 command_runner.go:130] > # 	"/etc/cdi",
	I0130 20:04:05.368033   28131 command_runner.go:130] > # 	"/var/run/cdi",
	I0130 20:04:05.368036   28131 command_runner.go:130] > # ]
	I0130 20:04:05.368045   28131 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0130 20:04:05.368051   28131 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0130 20:04:05.368061   28131 command_runner.go:130] > # Defaults to false.
	I0130 20:04:05.368071   28131 command_runner.go:130] > # device_ownership_from_security_context = false
	I0130 20:04:05.368084   28131 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0130 20:04:05.368098   28131 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0130 20:04:05.368107   28131 command_runner.go:130] > # hooks_dir = [
	I0130 20:04:05.368115   28131 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0130 20:04:05.368122   28131 command_runner.go:130] > # ]
	I0130 20:04:05.368128   28131 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0130 20:04:05.368137   28131 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0130 20:04:05.368148   28131 command_runner.go:130] > # its default mounts from the following two files:
	I0130 20:04:05.368157   28131 command_runner.go:130] > #
	I0130 20:04:05.368168   28131 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0130 20:04:05.368182   28131 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0130 20:04:05.368194   28131 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0130 20:04:05.368203   28131 command_runner.go:130] > #
	I0130 20:04:05.368213   28131 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0130 20:04:05.368222   28131 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0130 20:04:05.368232   28131 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0130 20:04:05.368244   28131 command_runner.go:130] > #      only add mounts it finds in this file.
	I0130 20:04:05.368254   28131 command_runner.go:130] > #
	I0130 20:04:05.368261   28131 command_runner.go:130] > # default_mounts_file = ""
	I0130 20:04:05.368271   28131 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0130 20:04:05.368284   28131 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0130 20:04:05.368294   28131 command_runner.go:130] > pids_limit = 1024
	I0130 20:04:05.368303   28131 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0130 20:04:05.368311   28131 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0130 20:04:05.368321   28131 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0130 20:04:05.368338   28131 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0130 20:04:05.368349   28131 command_runner.go:130] > # log_size_max = -1
	I0130 20:04:05.368360   28131 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0130 20:04:05.368371   28131 command_runner.go:130] > # log_to_journald = false
	I0130 20:04:05.368386   28131 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0130 20:04:05.368394   28131 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0130 20:04:05.368400   28131 command_runner.go:130] > # Path to directory for container attach sockets.
	I0130 20:04:05.368412   28131 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0130 20:04:05.368425   28131 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0130 20:04:05.368435   28131 command_runner.go:130] > # bind_mount_prefix = ""
	I0130 20:04:05.368448   28131 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0130 20:04:05.368457   28131 command_runner.go:130] > # read_only = false
	I0130 20:04:05.368471   28131 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0130 20:04:05.368480   28131 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0130 20:04:05.368484   28131 command_runner.go:130] > # live configuration reload.
	I0130 20:04:05.368494   28131 command_runner.go:130] > # log_level = "info"
	I0130 20:04:05.368504   28131 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0130 20:04:05.368516   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:04:05.368526   28131 command_runner.go:130] > # log_filter = ""
	I0130 20:04:05.368539   28131 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0130 20:04:05.368552   28131 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0130 20:04:05.368561   28131 command_runner.go:130] > # separated by comma.
	I0130 20:04:05.368565   28131 command_runner.go:130] > # uid_mappings = ""
	I0130 20:04:05.368577   28131 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0130 20:04:05.368591   28131 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0130 20:04:05.368602   28131 command_runner.go:130] > # separated by comma.
	I0130 20:04:05.368609   28131 command_runner.go:130] > # gid_mappings = ""
	I0130 20:04:05.368622   28131 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0130 20:04:05.368635   28131 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 20:04:05.368644   28131 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 20:04:05.368651   28131 command_runner.go:130] > # minimum_mappable_uid = -1
	I0130 20:04:05.368659   28131 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0130 20:04:05.368673   28131 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 20:04:05.368686   28131 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 20:04:05.368696   28131 command_runner.go:130] > # minimum_mappable_gid = -1
	I0130 20:04:05.368709   28131 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0130 20:04:05.368721   28131 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0130 20:04:05.368732   28131 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0130 20:04:05.368736   28131 command_runner.go:130] > # ctr_stop_timeout = 30
	I0130 20:04:05.368749   28131 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0130 20:04:05.368760   28131 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0130 20:04:05.368772   28131 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0130 20:04:05.368783   28131 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0130 20:04:05.368791   28131 command_runner.go:130] > drop_infra_ctr = false
	I0130 20:04:05.368802   28131 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0130 20:04:05.368814   28131 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0130 20:04:05.368826   28131 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0130 20:04:05.368834   28131 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0130 20:04:05.368845   28131 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0130 20:04:05.368857   28131 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0130 20:04:05.368868   28131 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0130 20:04:05.368883   28131 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0130 20:04:05.368893   28131 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0130 20:04:05.368907   28131 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0130 20:04:05.368919   28131 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0130 20:04:05.368930   28131 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0130 20:04:05.368941   28131 command_runner.go:130] > # default_runtime = "runc"
	I0130 20:04:05.368952   28131 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0130 20:04:05.368967   28131 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0130 20:04:05.368984   28131 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0130 20:04:05.368992   28131 command_runner.go:130] > # creation as a file is not desired either.
	I0130 20:04:05.369007   28131 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0130 20:04:05.369019   28131 command_runner.go:130] > # the hostname is being managed dynamically.
	I0130 20:04:05.369027   28131 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0130 20:04:05.369037   28131 command_runner.go:130] > # ]
	I0130 20:04:05.369047   28131 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0130 20:04:05.369061   28131 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0130 20:04:05.369074   28131 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0130 20:04:05.369083   28131 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0130 20:04:05.369089   28131 command_runner.go:130] > #
	I0130 20:04:05.369100   28131 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0130 20:04:05.369112   28131 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0130 20:04:05.369122   28131 command_runner.go:130] > #  runtime_type = "oci"
	I0130 20:04:05.369130   28131 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0130 20:04:05.369141   28131 command_runner.go:130] > #  privileged_without_host_devices = false
	I0130 20:04:05.369151   28131 command_runner.go:130] > #  allowed_annotations = []
	I0130 20:04:05.369158   28131 command_runner.go:130] > # Where:
	I0130 20:04:05.369164   28131 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0130 20:04:05.369175   28131 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0130 20:04:05.369218   28131 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0130 20:04:05.369236   28131 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0130 20:04:05.369242   28131 command_runner.go:130] > #   in $PATH.
	I0130 20:04:05.369249   28131 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0130 20:04:05.369259   28131 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0130 20:04:05.369269   28131 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0130 20:04:05.369279   28131 command_runner.go:130] > #   state.
	I0130 20:04:05.369293   28131 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0130 20:04:05.369305   28131 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0130 20:04:05.369319   28131 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0130 20:04:05.369332   28131 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0130 20:04:05.369347   28131 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0130 20:04:05.369362   28131 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0130 20:04:05.369374   28131 command_runner.go:130] > #   The currently recognized values are:
	I0130 20:04:05.369388   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0130 20:04:05.369403   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0130 20:04:05.369414   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0130 20:04:05.369424   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0130 20:04:05.369436   28131 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0130 20:04:05.369450   28131 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0130 20:04:05.369460   28131 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0130 20:04:05.369473   28131 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0130 20:04:05.369483   28131 command_runner.go:130] > #   should be moved to the container's cgroup
	I0130 20:04:05.369494   28131 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0130 20:04:05.369504   28131 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0130 20:04:05.369514   28131 command_runner.go:130] > runtime_type = "oci"
	I0130 20:04:05.369524   28131 command_runner.go:130] > runtime_root = "/run/runc"
	I0130 20:04:05.369534   28131 command_runner.go:130] > runtime_config_path = ""
	I0130 20:04:05.369542   28131 command_runner.go:130] > monitor_path = ""
	I0130 20:04:05.369552   28131 command_runner.go:130] > monitor_cgroup = ""
	I0130 20:04:05.369561   28131 command_runner.go:130] > monitor_exec_cgroup = ""
	I0130 20:04:05.369573   28131 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0130 20:04:05.369579   28131 command_runner.go:130] > # running containers
	I0130 20:04:05.369584   28131 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0130 20:04:05.369593   28131 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0130 20:04:05.369615   28131 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0130 20:04:05.369624   28131 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0130 20:04:05.369629   28131 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0130 20:04:05.369634   28131 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0130 20:04:05.369640   28131 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0130 20:04:05.369647   28131 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0130 20:04:05.369652   28131 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0130 20:04:05.369660   28131 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0130 20:04:05.369667   28131 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0130 20:04:05.369674   28131 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0130 20:04:05.369680   28131 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0130 20:04:05.369690   28131 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0130 20:04:05.369699   28131 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0130 20:04:05.369707   28131 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0130 20:04:05.369715   28131 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0130 20:04:05.369725   28131 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0130 20:04:05.369731   28131 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0130 20:04:05.369738   28131 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0130 20:04:05.369744   28131 command_runner.go:130] > # Example:
	I0130 20:04:05.369749   28131 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0130 20:04:05.369756   28131 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0130 20:04:05.369761   28131 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0130 20:04:05.369770   28131 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0130 20:04:05.369774   28131 command_runner.go:130] > # cpuset = 0
	I0130 20:04:05.369779   28131 command_runner.go:130] > # cpushares = "0-1"
	I0130 20:04:05.369783   28131 command_runner.go:130] > # Where:
	I0130 20:04:05.369788   28131 command_runner.go:130] > # The workload name is workload-type.
	I0130 20:04:05.369795   28131 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0130 20:04:05.369803   28131 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0130 20:04:05.369809   28131 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0130 20:04:05.369818   28131 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0130 20:04:05.369826   28131 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0130 20:04:05.369830   28131 command_runner.go:130] > # 
	I0130 20:04:05.369838   28131 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0130 20:04:05.369843   28131 command_runner.go:130] > #
	I0130 20:04:05.369849   28131 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0130 20:04:05.369856   28131 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0130 20:04:05.369862   28131 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0130 20:04:05.369870   28131 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0130 20:04:05.369876   28131 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0130 20:04:05.369883   28131 command_runner.go:130] > [crio.image]
	I0130 20:04:05.369889   28131 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0130 20:04:05.369897   28131 command_runner.go:130] > # default_transport = "docker://"
	I0130 20:04:05.369908   28131 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0130 20:04:05.369917   28131 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0130 20:04:05.369921   28131 command_runner.go:130] > # global_auth_file = ""
	I0130 20:04:05.369927   28131 command_runner.go:130] > # The image used to instantiate infra containers.
	I0130 20:04:05.369932   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:04:05.369939   28131 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0130 20:04:05.369945   28131 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0130 20:04:05.369953   28131 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0130 20:04:05.369959   28131 command_runner.go:130] > # This option supports live configuration reload.
	I0130 20:04:05.369966   28131 command_runner.go:130] > # pause_image_auth_file = ""
	I0130 20:04:05.369971   28131 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0130 20:04:05.369979   28131 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0130 20:04:05.369985   28131 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0130 20:04:05.369993   28131 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0130 20:04:05.369997   28131 command_runner.go:130] > # pause_command = "/pause"
	I0130 20:04:05.370005   28131 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0130 20:04:05.370012   28131 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0130 20:04:05.370018   28131 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0130 20:04:05.370024   28131 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0130 20:04:05.370032   28131 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0130 20:04:05.370036   28131 command_runner.go:130] > # signature_policy = ""
	I0130 20:04:05.370043   28131 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0130 20:04:05.370049   28131 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0130 20:04:05.370056   28131 command_runner.go:130] > # changing them here.
	I0130 20:04:05.370060   28131 command_runner.go:130] > # insecure_registries = [
	I0130 20:04:05.370065   28131 command_runner.go:130] > # ]
	I0130 20:04:05.370071   28131 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0130 20:04:05.370077   28131 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0130 20:04:05.370081   28131 command_runner.go:130] > # image_volumes = "mkdir"
	I0130 20:04:05.370086   28131 command_runner.go:130] > # Temporary directory to use for storing big files
	I0130 20:04:05.370092   28131 command_runner.go:130] > # big_files_temporary_dir = ""
	I0130 20:04:05.370098   28131 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0130 20:04:05.370104   28131 command_runner.go:130] > # CNI plugins.
	I0130 20:04:05.370109   28131 command_runner.go:130] > [crio.network]
	I0130 20:04:05.370115   28131 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0130 20:04:05.370122   28131 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0130 20:04:05.370127   28131 command_runner.go:130] > # cni_default_network = ""
	I0130 20:04:05.370134   28131 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0130 20:04:05.370139   28131 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0130 20:04:05.370147   28131 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0130 20:04:05.370151   28131 command_runner.go:130] > # plugin_dirs = [
	I0130 20:04:05.370156   28131 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0130 20:04:05.370160   28131 command_runner.go:130] > # ]
	I0130 20:04:05.370168   28131 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0130 20:04:05.370172   28131 command_runner.go:130] > [crio.metrics]
	I0130 20:04:05.370177   28131 command_runner.go:130] > # Globally enable or disable metrics support.
	I0130 20:04:05.370183   28131 command_runner.go:130] > enable_metrics = true
	I0130 20:04:05.370188   28131 command_runner.go:130] > # Specify enabled metrics collectors.
	I0130 20:04:05.370193   28131 command_runner.go:130] > # Per default all metrics are enabled.
	I0130 20:04:05.370199   28131 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0130 20:04:05.370207   28131 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0130 20:04:05.370213   28131 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0130 20:04:05.370220   28131 command_runner.go:130] > # metrics_collectors = [
	I0130 20:04:05.370224   28131 command_runner.go:130] > # 	"operations",
	I0130 20:04:05.370231   28131 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0130 20:04:05.370235   28131 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0130 20:04:05.370242   28131 command_runner.go:130] > # 	"operations_errors",
	I0130 20:04:05.370246   28131 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0130 20:04:05.370253   28131 command_runner.go:130] > # 	"image_pulls_by_name",
	I0130 20:04:05.370257   28131 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0130 20:04:05.370263   28131 command_runner.go:130] > # 	"image_pulls_failures",
	I0130 20:04:05.370267   28131 command_runner.go:130] > # 	"image_pulls_successes",
	I0130 20:04:05.370273   28131 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0130 20:04:05.370278   28131 command_runner.go:130] > # 	"image_layer_reuse",
	I0130 20:04:05.370282   28131 command_runner.go:130] > # 	"containers_oom_total",
	I0130 20:04:05.370287   28131 command_runner.go:130] > # 	"containers_oom",
	I0130 20:04:05.370291   28131 command_runner.go:130] > # 	"processes_defunct",
	I0130 20:04:05.370295   28131 command_runner.go:130] > # 	"operations_total",
	I0130 20:04:05.370302   28131 command_runner.go:130] > # 	"operations_latency_seconds",
	I0130 20:04:05.370307   28131 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0130 20:04:05.370311   28131 command_runner.go:130] > # 	"operations_errors_total",
	I0130 20:04:05.370318   28131 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0130 20:04:05.370322   28131 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0130 20:04:05.370329   28131 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0130 20:04:05.370334   28131 command_runner.go:130] > # 	"image_pulls_success_total",
	I0130 20:04:05.370340   28131 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0130 20:04:05.370345   28131 command_runner.go:130] > # 	"containers_oom_count_total",
	I0130 20:04:05.370350   28131 command_runner.go:130] > # ]
	I0130 20:04:05.370355   28131 command_runner.go:130] > # The port on which the metrics server will listen.
	I0130 20:04:05.370362   28131 command_runner.go:130] > # metrics_port = 9090
	I0130 20:04:05.370366   28131 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0130 20:04:05.370373   28131 command_runner.go:130] > # metrics_socket = ""
	I0130 20:04:05.370379   28131 command_runner.go:130] > # The certificate for the secure metrics server.
	I0130 20:04:05.370387   28131 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0130 20:04:05.370393   28131 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0130 20:04:05.370400   28131 command_runner.go:130] > # certificate on any modification event.
	I0130 20:04:05.370404   28131 command_runner.go:130] > # metrics_cert = ""
	I0130 20:04:05.370410   28131 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0130 20:04:05.370415   28131 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0130 20:04:05.370420   28131 command_runner.go:130] > # metrics_key = ""
	I0130 20:04:05.370427   28131 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0130 20:04:05.370432   28131 command_runner.go:130] > [crio.tracing]
	I0130 20:04:05.370439   28131 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0130 20:04:05.370443   28131 command_runner.go:130] > # enable_tracing = false
	I0130 20:04:05.370451   28131 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0130 20:04:05.370455   28131 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0130 20:04:05.370463   28131 command_runner.go:130] > # Number of samples to collect per million spans.
	I0130 20:04:05.370467   28131 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0130 20:04:05.370475   28131 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0130 20:04:05.370481   28131 command_runner.go:130] > [crio.stats]
	I0130 20:04:05.370487   28131 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0130 20:04:05.370495   28131 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0130 20:04:05.370499   28131 command_runner.go:130] > # stats_collection_period = 0
	I0130 20:04:05.370524   28131 command_runner.go:130] ! time="2024-01-30 20:04:05.354968278Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0130 20:04:05.370537   28131 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
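The block above is CRI-O's effective configuration as reported at startup (version 1.24.1), including the pause image and the runc runtime table that kubelet will use over /var/run/crio/crio.sock. As a quick, hand-run sanity check that the runtime answers on that socket, a minimal Go sketch like the one below could be used on the node; it assumes crictl is installed there and simply shells out to it, which is not how minikube itself talks to CRI-O.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Query CRI-O's version over the socket configured above.
		out, err := exec.Command("sudo", "crictl",
			"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
			"version").CombinedOutput()
		if err != nil {
			fmt.Printf("crictl failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}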
	I0130 20:04:05.370591   28131 cni.go:84] Creating CNI manager for ""
	I0130 20:04:05.370600   28131 cni.go:136] 3 nodes found, recommending kindnet
	I0130 20:04:05.370609   28131 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:04:05.370625   28131 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-572652 NodeName:multinode-572652-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:04:05.370724   28131 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-572652-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:04:05.370767   28131 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-572652-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-572652 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:04:05.370811   28131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:04:05.381893   28131 command_runner.go:130] > kubeadm
	I0130 20:04:05.381919   28131 command_runner.go:130] > kubectl
	I0130 20:04:05.381924   28131 command_runner.go:130] > kubelet
	I0130 20:04:05.382026   28131 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:04:05.382087   28131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0130 20:04:05.393077   28131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0130 20:04:05.410594   28131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
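The two scp calls above install the generated kubelet systemd unit and its kubeadm drop-in on the node. Reproduced by hand, that step amounts to writing the files and reloading systemd; the sketch below is an illustration only (paths and flags taken from the log, ExecStart shortened, root privileges assumed), not the SSH-based code path minikube actually uses.

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Drop-in overriding ExecStart with (a shortened form of) the flags above.
		dropIn := "[Service]\n" +
			"ExecStart=\n" +
			"ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet " +
			"--config=/var/lib/kubelet/config.yaml " +
			"--container-runtime-endpoint=unix:///var/run/crio/crio.sock " +
			"--hostname-override=multinode-572652-m03 --node-ip=192.168.39.58\n"

		if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
			panic(err)
		}
		// Make systemd pick up the new drop-in.
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			panic(err)
		}
	}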
	I0130 20:04:05.428439   28131 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0130 20:04:05.432501   28131 command_runner.go:130] > 192.168.39.186	control-plane.minikube.internal
	I0130 20:04:05.432573   28131 host.go:66] Checking if "multinode-572652" exists ...
	I0130 20:04:05.432820   28131 config.go:182] Loaded profile config "multinode-572652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:04:05.432946   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:04:05.432987   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:04:05.448744   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I0130 20:04:05.449150   28131 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:04:05.449604   28131 main.go:141] libmachine: Using API Version  1
	I0130 20:04:05.449626   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:04:05.449962   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:04:05.450140   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 20:04:05.450293   28131 start.go:304] JoinCluster: &{Name:multinode-572652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-572652 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:04:05.450438   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0130 20:04:05.450452   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:04:05.453503   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:04:05.453906   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:04:05.453933   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:04:05.454070   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 20:04:05.454235   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:04:05.454409   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 20:04:05.454584   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652/id_rsa Username:docker}
	I0130 20:04:05.652892   28131 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token o729k5.n9vjehlbnm3rt52h --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
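The join command above is minted on the control-plane with "kubeadm token create --print-join-command --ttl=0", run over SSH with the versioned binaries directory prepended to PATH. A hedged Go sketch of the same call is shown below; it assumes kubeadm is already on the PATH of the machine it runs on.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mint a non-expiring bootstrap token and print the matching join command,
		// mirroring the command minikube runs over SSH above.
		out, err := exec.Command("sudo", "kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").CombinedOutput()
		if err != nil {
			fmt.Printf("token create failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("join command: %s", out)
	}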
	I0130 20:04:05.653150   28131 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0130 20:04:05.653200   28131 host.go:66] Checking if "multinode-572652" exists ...
	I0130 20:04:05.653503   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:04:05.653538   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:04:05.667938   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45469
	I0130 20:04:05.668314   28131 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:04:05.668737   28131 main.go:141] libmachine: Using API Version  1
	I0130 20:04:05.668759   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:04:05.669080   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:04:05.669224   28131 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 20:04:05.669402   28131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-572652-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0130 20:04:05.669420   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 20:04:05.672231   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:04:05.672677   28131 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:59:51 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 20:04:05.672701   28131 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 20:04:05.672842   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 20:04:05.673023   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 20:04:05.673173   28131 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 20:04:05.673272   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652/id_rsa Username:docker}
	I0130 20:04:05.833881   28131 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0130 20:04:05.899101   28131 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-srbck, kube-system/kube-proxy-j5sr4
	I0130 20:04:08.916853   28131 command_runner.go:130] > node/multinode-572652-m03 cordoned
	I0130 20:04:08.916879   28131 command_runner.go:130] > pod "busybox-5b5d89c9d6-lfjc4" has DeletionTimestamp older than 1 seconds, skipping
	I0130 20:04:08.916889   28131 command_runner.go:130] > node/multinode-572652-m03 drained
	I0130 20:04:08.916908   28131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-572652-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.24748617s)
	I0130 20:04:08.916919   28131 node.go:108] successfully drained node "m03"
	I0130 20:04:08.917263   28131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:04:08.917499   28131 kapi.go:59] client config for multinode-572652: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.crt", KeyFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.key", CAFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 20:04:08.917776   28131 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0130 20:04:08.917827   28131 round_trippers.go:463] DELETE https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m03
	I0130 20:04:08.917838   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:08.917849   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:08.917862   28131 round_trippers.go:473]     Content-Type: application/json
	I0130 20:04:08.917874   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:08.929118   28131 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0130 20:04:08.929137   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:08.929143   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:08.929149   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:08.929154   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:08.929162   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:08.929170   28131 round_trippers.go:580]     Content-Length: 171
	I0130 20:04:08.929177   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:08 GMT
	I0130 20:04:08.929184   28131 round_trippers.go:580]     Audit-Id: 25b7cd7f-293f-448f-b826-8dcd5c360cff
	I0130 20:04:08.929361   28131 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-572652-m03","kind":"nodes","uid":"6e43dfc4-d01d-44de-b61c-e668bf1447ff"}}
	I0130 20:04:08.929412   28131 node.go:124] successfully deleted node "m03"
	I0130 20:04:08.929424   28131 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
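Before rejoining, the stale m03 registration is drained with kubectl over SSH and then removed with the DELETE request traced above. The following client-go sketch covers the cordon-and-delete part only (pod eviction, which kubectl drain also performs, is omitted); the kubeconfig path is the one from the log, and the code is an illustration, not minikube's implementation.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		ctx := context.Background()
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18007-4458/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		const name = "multinode-572652-m03"

		// Cordon: mark the node unschedulable before removing it.
		node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		node.Spec.Unschedulable = true
		if _, err := client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}

		// Delete the Node object, mirroring the DELETE /api/v1/nodes/... above.
		if err := client.CoreV1().Nodes().Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("node deleted:", name)
	}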
	I0130 20:04:08.929449   28131 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0130 20:04:08.929471   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o729k5.n9vjehlbnm3rt52h --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-572652-m03"
	I0130 20:04:08.988083   28131 command_runner.go:130] ! W0130 20:04:08.982775    2367 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0130 20:04:08.988361   28131 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0130 20:04:09.142382   28131 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0130 20:04:09.142416   28131 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0130 20:04:09.912674   28131 command_runner.go:130] > [preflight] Running pre-flight checks
	I0130 20:04:09.912704   28131 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0130 20:04:09.912719   28131 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0130 20:04:09.912732   28131 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:04:09.912743   28131 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:04:09.912752   28131 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0130 20:04:09.912762   28131 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0130 20:04:09.912777   28131 command_runner.go:130] > This node has joined the cluster:
	I0130 20:04:09.912791   28131 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0130 20:04:09.912805   28131 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0130 20:04:09.912819   28131 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0130 20:04:09.913061   28131 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0130 20:04:10.168849   28131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=multinode-572652 minikube.k8s.io/updated_at=2024_01_30T20_04_10_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:04:10.269837   28131 command_runner.go:130] > node/multinode-572652-m02 labeled
	I0130 20:04:10.285705   28131 command_runner.go:130] > node/multinode-572652-m03 labeled
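After the join, minikube labels every non-primary node with version, commit, name and updated_at metadata, which is what the kubectl label invocation above does for both m02 and m03. A client-go equivalent for a single node could look like the sketch below; the label values are copied from the log, and the patch-based approach is an assumption, not minikube's code path.

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/types"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		ctx := context.Background()
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18007-4458/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same labels the kubectl invocation above applies to non-primary nodes.
		patch := []byte(`{"metadata":{"labels":{
			"minikube.k8s.io/version":"v1.32.0",
			"minikube.k8s.io/name":"multinode-572652",
			"minikube.k8s.io/primary":"false"}}}`)

		_, err = client.CoreV1().Nodes().Patch(ctx, "multinode-572652-m03",
			types.StrategicMergePatchType, patch, metav1.PatchOptions{})
		if err != nil {
			panic(err)
		}
	}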
	I0130 20:04:10.288162   28131 start.go:306] JoinCluster complete in 4.837865188s
	I0130 20:04:10.288186   28131 cni.go:84] Creating CNI manager for ""
	I0130 20:04:10.288191   28131 cni.go:136] 3 nodes found, recommending kindnet
	I0130 20:04:10.288233   28131 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0130 20:04:10.294349   28131 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0130 20:04:10.294384   28131 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0130 20:04:10.294395   28131 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0130 20:04:10.294405   28131 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 20:04:10.294417   28131 command_runner.go:130] > Access: 2024-01-30 19:59:52.571662116 +0000
	I0130 20:04:10.294425   28131 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0130 20:04:10.294433   28131 command_runner.go:130] > Change: 2024-01-30 19:59:50.660662116 +0000
	I0130 20:04:10.294439   28131 command_runner.go:130] >  Birth: -
	I0130 20:04:10.294484   28131 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0130 20:04:10.294494   28131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0130 20:04:10.317397   28131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0130 20:04:10.689627   28131 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0130 20:04:10.689654   28131 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0130 20:04:10.689663   28131 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0130 20:04:10.689671   28131 command_runner.go:130] > daemonset.apps/kindnet configured
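The kindnet CNI manifest is copied to /var/tmp/minikube/cni.yaml and applied with the bundled kubectl, which is why the objects come back "unchanged"/"configured" above. A minimal Go sketch of that apply step, shelling out to the same binary and kubeconfig paths shown in the log, follows; it is an illustration rather than the ssh_runner call minikube performs.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Apply the CNI manifest with the kubectl binary and kubeconfig shown above.
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("apply failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}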
	I0130 20:04:10.690277   28131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:04:10.690579   28131 kapi.go:59] client config for multinode-572652: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.crt", KeyFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.key", CAFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 20:04:10.690945   28131 round_trippers.go:463] GET https://192.168.39.186:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0130 20:04:10.690958   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.690969   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.690980   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.693074   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:04:10.693093   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.693102   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.693109   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.693115   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.693122   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.693127   28131 round_trippers.go:580]     Content-Length: 291
	I0130 20:04:10.693133   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.693138   28131 round_trippers.go:580]     Audit-Id: 96c582b0-1c23-46fa-b7f6-c84641ef1971
	I0130 20:04:10.693154   28131 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2034a0c9-1da9-4b9e-a99f-a32637cca2aa","resourceVersion":"871","creationTimestamp":"2024-01-30T19:50:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0130 20:04:10.693222   28131 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-572652" context rescaled to 1 replicas
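The GET above reads the scale subresource of the coredns deployment, after which minikube pins it to one replica for this profile. The sketch below does the same through client-go's GetScale/UpdateScale; the kubeconfig path is the CI host path from the log and the snippet is illustrative only.

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		ctx := context.Background()
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18007-4458/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Read the current scale of the coredns deployment (the GET above).
		scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		// Pin it to a single replica, as the log reports for this profile.
		scale.Spec.Replicas = 1
		_, err = client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		if err != nil {
			panic(err)
		}
	}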
	I0130 20:04:10.693246   28131 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0130 20:04:10.695045   28131 out.go:177] * Verifying Kubernetes components...
	I0130 20:04:10.696482   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:04:10.711705   28131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:04:10.712095   28131 kapi.go:59] client config for multinode-572652: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.crt", KeyFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/profiles/multinode-572652/client.key", CAFile:"/home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 20:04:10.712362   28131 node_ready.go:35] waiting up to 6m0s for node "multinode-572652-m03" to be "Ready" ...
	I0130 20:04:10.712419   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m03
	I0130 20:04:10.712426   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.712434   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.712440   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.715022   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:04:10.715036   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.715045   28131 round_trippers.go:580]     Audit-Id: 0e31c2fc-6605-4000-9831-a796084b7f2d
	I0130 20:04:10.715054   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.715062   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.715071   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.715079   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.715086   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.715196   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652-m03","uid":"0326d05b-24ae-49ca-9a88-d2c57b57ec0a","resourceVersion":"1201","creationTimestamp":"2024-01-30T20:04:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T20_04_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0130 20:04:10.715462   28131 node_ready.go:49] node "multinode-572652-m03" has status "Ready":"True"
	I0130 20:04:10.715476   28131 node_ready.go:38] duration metric: took 3.100411ms waiting for node "multinode-572652-m03" to be "Ready" ...
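node_ready waits up to 6m0s for the new node's Ready condition; here the condition was already True on the first GET, so the wait finished in about 3ms. A client-go polling loop in the spirit of that check is sketched below (interval, kubeconfig path and error handling are assumptions, not minikube's exact logic).

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18007-4458/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		const name = "multinode-572652-m03"

		// Poll the Node object until its Ready condition is True, up to 6 minutes.
		err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("node is Ready:", name)
	}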
	I0130 20:04:10.715484   28131 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:04:10.715528   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods
	I0130 20:04:10.715535   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.715541   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.715548   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.719075   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:04:10.719090   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.719099   28131 round_trippers.go:580]     Audit-Id: c8beb92a-a8e2-452d-85a3-065eacbd5e8c
	I0130 20:04:10.719108   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.719116   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.719124   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.719133   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.719141   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.720745   28131 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"850","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82079 chars]
	I0130 20:04:10.723632   28131 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-579fc" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:10.723700   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-579fc
	I0130 20:04:10.723707   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.723715   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.723721   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.726362   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:04:10.726377   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.726383   28131 round_trippers.go:580]     Audit-Id: bc05f00e-0636-4ac2-bf0c-016e2bdcd3c1
	I0130 20:04:10.726389   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.726394   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.726399   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.726404   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.726410   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.726640   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-579fc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8ed4a94c-417c-480d-9f9a-4101a5103066","resourceVersion":"850","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"39fdf010-d57e-4327-975b-6a5e640212c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fdf010-d57e-4327-975b-6a5e640212c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0130 20:04:10.727096   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:04:10.727110   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.727118   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.727124   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.729255   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:04:10.729272   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.729281   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.729289   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.729297   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.729305   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.729316   28131 round_trippers.go:580]     Audit-Id: 46e95110-d27b-47e3-b46f-0550af624d7d
	I0130 20:04:10.729329   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.729539   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:04:10.729910   28131 pod_ready.go:92] pod "coredns-5dd5756b68-579fc" in "kube-system" namespace has status "Ready":"True"
	I0130 20:04:10.729938   28131 pod_ready.go:81] duration metric: took 6.278489ms waiting for pod "coredns-5dd5756b68-579fc" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:10.729954   28131 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:10.729996   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-572652
	I0130 20:04:10.730003   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.730010   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.730016   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.732002   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:04:10.732017   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.732025   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.732034   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.732042   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.732057   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.732066   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.732079   28131 round_trippers.go:580]     Audit-Id: b3b31187-8165-4ff3-a9b8-08a5ebcc5929
	I0130 20:04:10.732469   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-572652","namespace":"kube-system","uid":"e44ed93f-1c85-4d27-bacb-f454d6eaa0b6","resourceVersion":"857","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.186:2379","kubernetes.io/config.hash":"3d195cc1c68274636debff677374c054","kubernetes.io/config.mirror":"3d195cc1c68274636debff677374c054","kubernetes.io/config.seen":"2024-01-30T19:50:00.428284843Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0130 20:04:10.732756   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:04:10.732768   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.732778   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.732787   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.734599   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:04:10.734617   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.734627   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.734635   28131 round_trippers.go:580]     Audit-Id: 94a3e996-b027-4c9d-b052-cbcb9473bbde
	I0130 20:04:10.734642   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.734650   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.734667   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.734675   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.734895   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:04:10.735156   28131 pod_ready.go:92] pod "etcd-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:04:10.735170   28131 pod_ready.go:81] duration metric: took 5.209511ms waiting for pod "etcd-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:10.735191   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:10.735243   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-572652
	I0130 20:04:10.735253   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.735277   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.735291   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.737123   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:04:10.737141   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.737151   28131 round_trippers.go:580]     Audit-Id: 9decff7e-4932-4801-84bf-01154abfe943
	I0130 20:04:10.737159   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.737166   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.737177   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.737185   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.737200   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.737341   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-572652","namespace":"kube-system","uid":"fc451607-277c-45fe-a0f9-a3502db0251b","resourceVersion":"863","creationTimestamp":"2024-01-30T19:49:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.186:8443","kubernetes.io/config.hash":"d6f18dcbbdea790709196864d2f77f8b","kubernetes.io/config.mirror":"d6f18dcbbdea790709196864d2f77f8b","kubernetes.io/config.seen":"2024-01-30T19:49:51.352745901Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:49:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0130 20:04:10.737648   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:04:10.737660   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.737670   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.737679   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.739282   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:04:10.739304   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.739315   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.739323   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.739330   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.739338   28131 round_trippers.go:580]     Audit-Id: 4be98db0-f164-4991-b7c0-bfa2f2432cc2
	I0130 20:04:10.739346   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.739354   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.739556   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:04:10.739914   28131 pod_ready.go:92] pod "kube-apiserver-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:04:10.739942   28131 pod_ready.go:81] duration metric: took 4.738519ms waiting for pod "kube-apiserver-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:10.739955   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:10.740007   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-572652
	I0130 20:04:10.740017   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.740027   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.740039   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.742062   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:04:10.742077   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.742086   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.742095   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.742104   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.742118   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.742127   28131 round_trippers.go:580]     Audit-Id: 66ad9dd1-9a58-48f1-92fb-b67fc4fbfac2
	I0130 20:04:10.742139   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.742640   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-572652","namespace":"kube-system","uid":"ce85a6a9-3600-41a9-824a-d01c009aead2","resourceVersion":"877","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.mirror":"c7787439db55e175a329eec0f92a7a11","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289181Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0130 20:04:10.743004   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:04:10.743017   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.743028   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.743036   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.744786   28131 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 20:04:10.744805   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.744814   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.744822   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.744834   28131 round_trippers.go:580]     Audit-Id: 2a04cc36-2f8d-46e9-817b-76d80e356894
	I0130 20:04:10.744843   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.744853   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.744862   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.745002   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:04:10.745336   28131 pod_ready.go:92] pod "kube-controller-manager-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:04:10.745351   28131 pod_ready.go:81] duration metric: took 5.387953ms waiting for pod "kube-controller-manager-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:10.745364   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hx9f7" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:10.912747   28131 request.go:629] Waited for 167.330632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx9f7
	I0130 20:04:10.912799   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx9f7
	I0130 20:04:10.912806   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:10.912817   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:10.912833   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:10.915627   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:04:10.915642   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:10.915648   28131 round_trippers.go:580]     Audit-Id: 947737c4-ae92-41a5-b999-7a628eb61069
	I0130 20:04:10.915653   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:10.915659   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:10.915667   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:10.915676   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:10.915684   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:10 GMT
	I0130 20:04:10.915951   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hx9f7","generateName":"kube-proxy-","namespace":"kube-system","uid":"95d8777b-0e61-4662-a7a6-1fb5e7b4ae29","resourceVersion":"773","creationTimestamp":"2024-01-30T19:50:12Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0130 20:04:11.112632   28131 request.go:629] Waited for 196.279576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:04:11.112687   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:04:11.112704   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:11.112714   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:11.112724   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:11.115559   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:04:11.115579   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:11.115585   28131 round_trippers.go:580]     Audit-Id: 05bc8b76-191c-41c5-87e1-0b68687ce572
	I0130 20:04:11.115592   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:11.115601   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:11.115609   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:11.115618   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:11.115626   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:11 GMT
	I0130 20:04:11.115960   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:04:11.116291   28131 pod_ready.go:92] pod "kube-proxy-hx9f7" in "kube-system" namespace has status "Ready":"True"
	I0130 20:04:11.116308   28131 pod_ready.go:81] duration metric: took 370.934207ms waiting for pod "kube-proxy-hx9f7" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:11.116318   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j5sr4" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:11.313323   28131 request.go:629] Waited for 196.943687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5sr4
	I0130 20:04:11.313390   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5sr4
	I0130 20:04:11.313395   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:11.313403   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:11.313408   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:11.318993   28131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0130 20:04:11.319017   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:11.319026   28131 round_trippers.go:580]     Audit-Id: dc1144e1-bebf-4e35-a9ab-bf95a7d6eea9
	I0130 20:04:11.319034   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:11.319040   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:11.319048   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:11.319056   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:11.319065   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:11 GMT
	I0130 20:04:11.319393   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5sr4","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6bacfbc-c1e8-4dd2-bd48-778725887a72","resourceVersion":"1204","creationTimestamp":"2024-01-30T19:51:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0130 20:04:11.513051   28131 request.go:629] Waited for 193.247252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m03
	I0130 20:04:11.513135   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m03
	I0130 20:04:11.513150   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:11.513162   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:11.513176   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:11.516287   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:04:11.516306   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:11.516313   28131 round_trippers.go:580]     Audit-Id: 3d1e339d-9626-43b2-ad72-500f8b19ec1a
	I0130 20:04:11.516319   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:11.516327   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:11.516335   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:11.516346   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:11.516359   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:11 GMT
	I0130 20:04:11.516561   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652-m03","uid":"0326d05b-24ae-49ca-9a88-d2c57b57ec0a","resourceVersion":"1201","creationTimestamp":"2024-01-30T20:04:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T20_04_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0130 20:04:11.712857   28131 request.go:629] Waited for 95.575824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5sr4
	I0130 20:04:11.712947   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5sr4
	I0130 20:04:11.712958   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:11.712970   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:11.712984   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:11.715744   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:04:11.715771   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:11.715778   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:11 GMT
	I0130 20:04:11.715786   28131 round_trippers.go:580]     Audit-Id: c513416c-55f5-425f-99d8-16b6c74aa905
	I0130 20:04:11.715795   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:11.715803   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:11.715816   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:11.715829   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:11.716308   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5sr4","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6bacfbc-c1e8-4dd2-bd48-778725887a72","resourceVersion":"1217","creationTimestamp":"2024-01-30T19:51:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0130 20:04:11.913075   28131 request.go:629] Waited for 196.38058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m03
	I0130 20:04:11.913145   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m03
	I0130 20:04:11.913150   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:11.913158   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:11.913167   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:11.916026   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:04:11.916042   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:11.916048   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:11.916054   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:11 GMT
	I0130 20:04:11.916060   28131 round_trippers.go:580]     Audit-Id: bd2b1aa9-a68c-4fdd-b1f0-55ff1cf856f4
	I0130 20:04:11.916069   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:11.916082   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:11.916093   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:11.916378   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652-m03","uid":"0326d05b-24ae-49ca-9a88-d2c57b57ec0a","resourceVersion":"1201","creationTimestamp":"2024-01-30T20:04:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T20_04_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0130 20:04:11.916634   28131 pod_ready.go:92] pod "kube-proxy-j5sr4" in "kube-system" namespace has status "Ready":"True"
	I0130 20:04:11.916647   28131 pod_ready.go:81] duration metric: took 800.322333ms waiting for pod "kube-proxy-j5sr4" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:11.916656   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rbwvp" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:12.113092   28131 request.go:629] Waited for 196.360639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rbwvp
	I0130 20:04:12.113148   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rbwvp
	I0130 20:04:12.113153   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:12.113160   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:12.113167   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:12.116035   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:04:12.116058   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:12.116065   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:12.116070   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:12.116075   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:12.116081   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:12 GMT
	I0130 20:04:12.116085   28131 round_trippers.go:580]     Audit-Id: a5c3cb87-94be-47fe-ae59-c9764a172343
	I0130 20:04:12.116091   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:12.116398   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rbwvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"2cd3c663-bf55-49b2-9120-101ac59912fd","resourceVersion":"1042","creationTimestamp":"2024-01-30T19:50:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e1c3365-a3ba-434b-96dd-44f8afef011c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c3365-a3ba-434b-96dd-44f8afef011c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0130 20:04:12.313140   28131 request.go:629] Waited for 196.351923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m02
	I0130 20:04:12.313192   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652-m02
	I0130 20:04:12.313197   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:12.313205   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:12.313210   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:12.318738   28131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0130 20:04:12.318759   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:12.318766   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:12 GMT
	I0130 20:04:12.318771   28131 round_trippers.go:580]     Audit-Id: 1db3f025-02ad-4386-bb34-0fc3a5c1afb4
	I0130 20:04:12.318776   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:12.318781   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:12.318786   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:12.318791   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:12.319156   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652-m02","uid":"0044ec35-b13c-4106-b118-c3ac58e05ff0","resourceVersion":"1200","creationTimestamp":"2024-01-30T20:02:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T20_04_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T20:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0130 20:04:12.319418   28131 pod_ready.go:92] pod "kube-proxy-rbwvp" in "kube-system" namespace has status "Ready":"True"
	I0130 20:04:12.319439   28131 pod_ready.go:81] duration metric: took 402.769995ms waiting for pod "kube-proxy-rbwvp" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:12.319449   28131 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:12.512950   28131 request.go:629] Waited for 193.425653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-572652
	I0130 20:04:12.513027   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-572652
	I0130 20:04:12.513041   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:12.513052   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:12.513062   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:12.516216   28131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 20:04:12.516239   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:12.516246   28131 round_trippers.go:580]     Audit-Id: 748bbc08-f997-4d56-96c1-2455b7d247d3
	I0130 20:04:12.516253   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:12.516258   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:12.516263   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:12.516269   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:12.516274   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:12 GMT
	I0130 20:04:12.516430   28131 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-572652","namespace":"kube-system","uid":"ee4d8608-40cb-4281-ac1f-bc5ac41ff27d","resourceVersion":"855","creationTimestamp":"2024-01-30T19:50:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"85e85fa7283981ab3a029cbc7c4cbcc1","kubernetes.io/config.mirror":"85e85fa7283981ab3a029cbc7c4cbcc1","kubernetes.io/config.seen":"2024-01-30T19:50:00.428289879Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T19:50:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0130 20:04:12.712546   28131 request.go:629] Waited for 195.676309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:04:12.712609   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes/multinode-572652
	I0130 20:04:12.712616   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:12.712626   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:12.712634   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:12.717109   28131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0130 20:04:12.717129   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:12.717136   28131 round_trippers.go:580]     Audit-Id: 34cf3501-f9f2-481e-b984-9548086dae69
	I0130 20:04:12.717142   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:12.717147   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:12.717160   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:12.717166   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:12.717179   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:12 GMT
	I0130 20:04:12.717385   28131 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T19:49:56Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 20:04:12.717791   28131 pod_ready.go:92] pod "kube-scheduler-multinode-572652" in "kube-system" namespace has status "Ready":"True"
	I0130 20:04:12.717817   28131 pod_ready.go:81] duration metric: took 398.355399ms waiting for pod "kube-scheduler-multinode-572652" in "kube-system" namespace to be "Ready" ...
	I0130 20:04:12.717832   28131 pod_ready.go:38] duration metric: took 2.002339523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:04:12.717850   28131 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:04:12.717896   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:04:12.734105   28131 system_svc.go:56] duration metric: took 16.246289ms WaitForService to wait for kubelet.
	I0130 20:04:12.734128   28131 kubeadm.go:581] duration metric: took 2.040865157s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:04:12.734144   28131 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:04:12.913461   28131 request.go:629] Waited for 179.239785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.186:8443/api/v1/nodes
	I0130 20:04:12.913532   28131 round_trippers.go:463] GET https://192.168.39.186:8443/api/v1/nodes
	I0130 20:04:12.913536   28131 round_trippers.go:469] Request Headers:
	I0130 20:04:12.913544   28131 round_trippers.go:473]     Accept: application/json, */*
	I0130 20:04:12.913550   28131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 20:04:12.916519   28131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 20:04:12.916536   28131 round_trippers.go:577] Response Headers:
	I0130 20:04:12.916542   28131 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cc3aa8d0-d089-4af0-994e-781d06b5f38f
	I0130 20:04:12.916548   28131 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f0fe528-9c56-4c83-9599-c4a40b569ca1
	I0130 20:04:12.916553   28131 round_trippers.go:580]     Date: Tue, 30 Jan 2024 20:04:12 GMT
	I0130 20:04:12.916558   28131 round_trippers.go:580]     Audit-Id: 76cf61da-0224-4cd2-af7d-78e054d8e611
	I0130 20:04:12.916563   28131 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 20:04:12.916569   28131 round_trippers.go:580]     Content-Type: application/json
	I0130 20:04:12.917447   28131 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1219"},"items":[{"metadata":{"name":"multinode-572652","uid":"2035f876-c94f-4e6f-98dd-3c7dd3595a6a","resourceVersion":"886","creationTimestamp":"2024-01-30T19:49:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-572652","kubernetes.io/os":"linux","minikube.k8s.io/commit":"274d15c48919de599d1c531208ca35671bcbf218","minikube.k8s.io/name":"multinode-572652","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T19_50_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16238 chars]
	I0130 20:04:12.918024   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:04:12.918041   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:04:12.918049   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:04:12.918053   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:04:12.918057   28131 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:04:12.918060   28131 node_conditions.go:123] node cpu capacity is 2
	I0130 20:04:12.918064   28131 node_conditions.go:105] duration metric: took 183.916599ms to run NodePressure ...
	I0130 20:04:12.918073   28131 start.go:228] waiting for startup goroutines ...
	I0130 20:04:12.918089   28131 start.go:242] writing updated cluster config ...
	I0130 20:04:12.918362   28131 ssh_runner.go:195] Run: rm -f paused
	I0130 20:04:12.965710   28131 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:04:12.967712   28131 out.go:177] * Done! kubectl is now configured to use "multinode-572652" cluster and "default" namespace by default
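	(Editor's note, not part of the captured log: the trace above shows the readiness polling that produced the client-side throttling messages, i.e. repeated GETs of kube-system pods and their nodes until each pod reports the PodReady condition. Below is a minimal, hypothetical client-go sketch of that pattern, assuming a standard kubeconfig; the helper name waitPodReady and the polling interval are illustrative only and are not minikube's implementation.)

	```go
	// Hypothetical sketch: poll a kube-system pod until its PodReady condition is
	// True, mirroring the GET-and-check loop visible in the trace above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady re-fetches the pod until PodReady=True or the timeout expires.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			// A short fixed interval; frequent requests like this are what trigger
			// the "Waited ... due to client-side throttling" lines in the log.
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-multinode-572652", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
	```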
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 19:59:51 UTC, ends at Tue 2024-01-30 20:04:14 UTC. --
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.044906781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706645054044891477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0fc59fb4-dbd3-4540-8be9-924881397c81 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.045851630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=642fe258-3e92-49bb-b8b9-9f8b01d4f5c5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.045923768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=642fe258-3e92-49bb-b8b9-9f8b01d4f5c5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.046130799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e176929c4f6c6167440567edc91fc26bcf3b2e2a09b2cfdde763ef2e93a94a,PodSandboxId:d746280ab49d75cb6db7908e4451d24bc1d2e68ff78397d81f3f969c3a198671,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706644858669197202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1eb366d-4b7c-4900-9e2e-83ebcee3d015,},Annotations:map[string]string{io.kubernetes.container.hash: 864f1847,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e943636a73f5b4af9be78c4c9242f22bdb11f9449a6df116a1e5c130a59a3928,PodSandboxId:199e41243845ea26f04b88f13217f8b9844bfefac0c73a53ceaa15f0d4f4cab6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706644837945734490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-sbgq8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb8e004e-7fa8-4a85-b493-390e1fd29719,},Annotations:map[string]string{io.kubernetes.container.hash: c0f332e1,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a16232e2f6727bfb0ead92294bf7fbc5f60d396fb52c10aef27fbac6048c1c,PodSandboxId:1dc728ecf9786a676df9a91c0585df37a52993975e2e53982b1e7c2caf7954ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706644834955889206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-579fc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ed4a94c-417c-480d-9f9a-4101a5103066,},Annotations:map[string]string{io.kubernetes.container.hash: 983b021e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3caba374245a41a87b97bd759a0e1121ff27f3451e82b9c6e69b83b3522c02f,PodSandboxId:f0195166a9ac86d87d769fabab203becb0cda80bff513573069afed8870dd414,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706644829838391967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rzx54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 87aab713-13c1-4fd2-bc90-73b2998226dc,},Annotations:map[string]string{io.kubernetes.container.hash: e843de79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badc83321619bbef2f188a7fd69413ee5cd59498e3e8da1abb143690753485f8,PodSandboxId:d746280ab49d75cb6db7908e4451d24bc1d2e68ff78397d81f3f969c3a198671,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706644827393071120,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1eb366d-4b7c-4900-9e2e-83ebcee3d015,},Annotations:map[string]string{io.kubernetes.container.hash: 864f1847,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72449596055b292210d26b10cf658612b05649ff8f2c36e630a0d6eb77c165e,PodSandboxId:3c92abb5f30c156c368e3f4ee98df4118d1d7fc5895dac1ec68be42d7a7c2932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706644827288043840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx9f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95d8777b-0e61-4662-a7a6-1fb5e7b4
ae29,},Annotations:map[string]string{io.kubernetes.container.hash: abb540aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8295c64db63bbe1d715acc035968b7f49bc6d6f86510683cd26cf990b4b9884,PodSandboxId:758977e7d619f28c91d6bf057cf106caa772921e3d560db4628d23e0d40aff32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706644820953624553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d195cc1c68274636debff677374c054,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a4c49fba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d61da25db0bf1ee89777b06f1c63495712f70cb407815a697af8f1ebf47d8110,PodSandboxId:181e301e6b1e38061a8cb2aecd0027f992c0ecbc4d2019149326d9acd098330f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706644821074763250,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e85fa7283981ab3a029cbc7c4cbcc1,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28562e55286dbd1b6b2542e329a4c8a63bd25b1150086b965ebc3ce9b1b03ee,PodSandboxId:5461122dc33b880c598609bc34f9d945796afdc025ab64d6a5a5fcca52f6a50f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706644820514540266,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f18dcbbdea790709196864d2f77f8b,},Annotations:map[string]string{io.kubernetes.container.hash: bdedfea2,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d21225d28655d404694cddea91147b6fd3113009f8624387b35c1bc0d20109d,PodSandboxId:7675e7f1c3fb00265729af16bca58095ebc7d7ddf049f1d0ff7b414e8cb8d4fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706644820407437650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7787439db55e175a329eec0f92a7a11,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=642fe258-3e92-49bb-b8b9-9f8b01d4f5c5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.093548778Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=89de37c2-3b46-4056-97ee-82ace45cbbe9 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.093627130Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=89de37c2-3b46-4056-97ee-82ace45cbbe9 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.094862819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1e12f473-bfad-4f43-8df2-b3f5d7ffbb05 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.095244727Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706645054095233098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1e12f473-bfad-4f43-8df2-b3f5d7ffbb05 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.095916017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3c7a8e9f-82cf-4362-a835-2e60fa6fbaf9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.095986608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3c7a8e9f-82cf-4362-a835-2e60fa6fbaf9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.096192451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e176929c4f6c6167440567edc91fc26bcf3b2e2a09b2cfdde763ef2e93a94a,PodSandboxId:d746280ab49d75cb6db7908e4451d24bc1d2e68ff78397d81f3f969c3a198671,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706644858669197202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1eb366d-4b7c-4900-9e2e-83ebcee3d015,},Annotations:map[string]string{io.kubernetes.container.hash: 864f1847,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e943636a73f5b4af9be78c4c9242f22bdb11f9449a6df116a1e5c130a59a3928,PodSandboxId:199e41243845ea26f04b88f13217f8b9844bfefac0c73a53ceaa15f0d4f4cab6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706644837945734490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-sbgq8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb8e004e-7fa8-4a85-b493-390e1fd29719,},Annotations:map[string]string{io.kubernetes.container.hash: c0f332e1,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a16232e2f6727bfb0ead92294bf7fbc5f60d396fb52c10aef27fbac6048c1c,PodSandboxId:1dc728ecf9786a676df9a91c0585df37a52993975e2e53982b1e7c2caf7954ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706644834955889206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-579fc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ed4a94c-417c-480d-9f9a-4101a5103066,},Annotations:map[string]string{io.kubernetes.container.hash: 983b021e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3caba374245a41a87b97bd759a0e1121ff27f3451e82b9c6e69b83b3522c02f,PodSandboxId:f0195166a9ac86d87d769fabab203becb0cda80bff513573069afed8870dd414,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706644829838391967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rzx54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 87aab713-13c1-4fd2-bc90-73b2998226dc,},Annotations:map[string]string{io.kubernetes.container.hash: e843de79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badc83321619bbef2f188a7fd69413ee5cd59498e3e8da1abb143690753485f8,PodSandboxId:d746280ab49d75cb6db7908e4451d24bc1d2e68ff78397d81f3f969c3a198671,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706644827393071120,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1eb366d-4b7c-4900-9e2e-83ebcee3d015,},Annotations:map[string]string{io.kubernetes.container.hash: 864f1847,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72449596055b292210d26b10cf658612b05649ff8f2c36e630a0d6eb77c165e,PodSandboxId:3c92abb5f30c156c368e3f4ee98df4118d1d7fc5895dac1ec68be42d7a7c2932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706644827288043840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx9f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95d8777b-0e61-4662-a7a6-1fb5e7b4
ae29,},Annotations:map[string]string{io.kubernetes.container.hash: abb540aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8295c64db63bbe1d715acc035968b7f49bc6d6f86510683cd26cf990b4b9884,PodSandboxId:758977e7d619f28c91d6bf057cf106caa772921e3d560db4628d23e0d40aff32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706644820953624553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d195cc1c68274636debff677374c054,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a4c49fba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d61da25db0bf1ee89777b06f1c63495712f70cb407815a697af8f1ebf47d8110,PodSandboxId:181e301e6b1e38061a8cb2aecd0027f992c0ecbc4d2019149326d9acd098330f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706644821074763250,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e85fa7283981ab3a029cbc7c4cbcc1,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28562e55286dbd1b6b2542e329a4c8a63bd25b1150086b965ebc3ce9b1b03ee,PodSandboxId:5461122dc33b880c598609bc34f9d945796afdc025ab64d6a5a5fcca52f6a50f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706644820514540266,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f18dcbbdea790709196864d2f77f8b,},Annotations:map[string]string{io.kubernetes.container.hash: bdedfea2,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d21225d28655d404694cddea91147b6fd3113009f8624387b35c1bc0d20109d,PodSandboxId:7675e7f1c3fb00265729af16bca58095ebc7d7ddf049f1d0ff7b414e8cb8d4fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706644820407437650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7787439db55e175a329eec0f92a7a11,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3c7a8e9f-82cf-4362-a835-2e60fa6fbaf9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.138758362Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9ebfaf56-5ee8-4759-b0a8-492461519ade name=/runtime.v1.RuntimeService/Version
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.138839582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9ebfaf56-5ee8-4759-b0a8-492461519ade name=/runtime.v1.RuntimeService/Version
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.140406970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=89a7595c-7adb-4520-98c5-d58a0bb8788a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.140879313Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706645054140864801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=89a7595c-7adb-4520-98c5-d58a0bb8788a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.141282463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=088d61bc-2bef-4812-a580-602c2e42c165 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.141322597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=088d61bc-2bef-4812-a580-602c2e42c165 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.141528074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e176929c4f6c6167440567edc91fc26bcf3b2e2a09b2cfdde763ef2e93a94a,PodSandboxId:d746280ab49d75cb6db7908e4451d24bc1d2e68ff78397d81f3f969c3a198671,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706644858669197202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1eb366d-4b7c-4900-9e2e-83ebcee3d015,},Annotations:map[string]string{io.kubernetes.container.hash: 864f1847,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e943636a73f5b4af9be78c4c9242f22bdb11f9449a6df116a1e5c130a59a3928,PodSandboxId:199e41243845ea26f04b88f13217f8b9844bfefac0c73a53ceaa15f0d4f4cab6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706644837945734490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-sbgq8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb8e004e-7fa8-4a85-b493-390e1fd29719,},Annotations:map[string]string{io.kubernetes.container.hash: c0f332e1,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a16232e2f6727bfb0ead92294bf7fbc5f60d396fb52c10aef27fbac6048c1c,PodSandboxId:1dc728ecf9786a676df9a91c0585df37a52993975e2e53982b1e7c2caf7954ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706644834955889206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-579fc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ed4a94c-417c-480d-9f9a-4101a5103066,},Annotations:map[string]string{io.kubernetes.container.hash: 983b021e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3caba374245a41a87b97bd759a0e1121ff27f3451e82b9c6e69b83b3522c02f,PodSandboxId:f0195166a9ac86d87d769fabab203becb0cda80bff513573069afed8870dd414,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706644829838391967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rzx54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 87aab713-13c1-4fd2-bc90-73b2998226dc,},Annotations:map[string]string{io.kubernetes.container.hash: e843de79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badc83321619bbef2f188a7fd69413ee5cd59498e3e8da1abb143690753485f8,PodSandboxId:d746280ab49d75cb6db7908e4451d24bc1d2e68ff78397d81f3f969c3a198671,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706644827393071120,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1eb366d-4b7c-4900-9e2e-83ebcee3d015,},Annotations:map[string]string{io.kubernetes.container.hash: 864f1847,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72449596055b292210d26b10cf658612b05649ff8f2c36e630a0d6eb77c165e,PodSandboxId:3c92abb5f30c156c368e3f4ee98df4118d1d7fc5895dac1ec68be42d7a7c2932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706644827288043840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx9f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95d8777b-0e61-4662-a7a6-1fb5e7b4
ae29,},Annotations:map[string]string{io.kubernetes.container.hash: abb540aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8295c64db63bbe1d715acc035968b7f49bc6d6f86510683cd26cf990b4b9884,PodSandboxId:758977e7d619f28c91d6bf057cf106caa772921e3d560db4628d23e0d40aff32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706644820953624553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d195cc1c68274636debff677374c054,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a4c49fba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d61da25db0bf1ee89777b06f1c63495712f70cb407815a697af8f1ebf47d8110,PodSandboxId:181e301e6b1e38061a8cb2aecd0027f992c0ecbc4d2019149326d9acd098330f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706644821074763250,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e85fa7283981ab3a029cbc7c4cbcc1,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28562e55286dbd1b6b2542e329a4c8a63bd25b1150086b965ebc3ce9b1b03ee,PodSandboxId:5461122dc33b880c598609bc34f9d945796afdc025ab64d6a5a5fcca52f6a50f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706644820514540266,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f18dcbbdea790709196864d2f77f8b,},Annotations:map[string]string{io.kubernetes.container.hash: bdedfea2,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d21225d28655d404694cddea91147b6fd3113009f8624387b35c1bc0d20109d,PodSandboxId:7675e7f1c3fb00265729af16bca58095ebc7d7ddf049f1d0ff7b414e8cb8d4fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706644820407437650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7787439db55e175a329eec0f92a7a11,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=088d61bc-2bef-4812-a580-602c2e42c165 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.188649217Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2c0c7785-f192-4734-b766-b1aeb4b6b00b name=/runtime.v1.RuntimeService/Version
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.188809066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2c0c7785-f192-4734-b766-b1aeb4b6b00b name=/runtime.v1.RuntimeService/Version
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.191362279Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c4f28a7b-1882-4c4c-af6c-03b3f38f623e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.191921632Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706645054191905868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c4f28a7b-1882-4c4c-af6c-03b3f38f623e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.193145691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=48f0837b-4ea5-4d6d-a9fd-0743918173f6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.193296790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=48f0837b-4ea5-4d6d-a9fd-0743918173f6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:04:14 multinode-572652 crio[713]: time="2024-01-30 20:04:14.193520163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e176929c4f6c6167440567edc91fc26bcf3b2e2a09b2cfdde763ef2e93a94a,PodSandboxId:d746280ab49d75cb6db7908e4451d24bc1d2e68ff78397d81f3f969c3a198671,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706644858669197202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1eb366d-4b7c-4900-9e2e-83ebcee3d015,},Annotations:map[string]string{io.kubernetes.container.hash: 864f1847,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e943636a73f5b4af9be78c4c9242f22bdb11f9449a6df116a1e5c130a59a3928,PodSandboxId:199e41243845ea26f04b88f13217f8b9844bfefac0c73a53ceaa15f0d4f4cab6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706644837945734490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-sbgq8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb8e004e-7fa8-4a85-b493-390e1fd29719,},Annotations:map[string]string{io.kubernetes.container.hash: c0f332e1,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a16232e2f6727bfb0ead92294bf7fbc5f60d396fb52c10aef27fbac6048c1c,PodSandboxId:1dc728ecf9786a676df9a91c0585df37a52993975e2e53982b1e7c2caf7954ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706644834955889206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-579fc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ed4a94c-417c-480d-9f9a-4101a5103066,},Annotations:map[string]string{io.kubernetes.container.hash: 983b021e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3caba374245a41a87b97bd759a0e1121ff27f3451e82b9c6e69b83b3522c02f,PodSandboxId:f0195166a9ac86d87d769fabab203becb0cda80bff513573069afed8870dd414,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706644829838391967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rzx54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 87aab713-13c1-4fd2-bc90-73b2998226dc,},Annotations:map[string]string{io.kubernetes.container.hash: e843de79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badc83321619bbef2f188a7fd69413ee5cd59498e3e8da1abb143690753485f8,PodSandboxId:d746280ab49d75cb6db7908e4451d24bc1d2e68ff78397d81f3f969c3a198671,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706644827393071120,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1eb366d-4b7c-4900-9e2e-83ebcee3d015,},Annotations:map[string]string{io.kubernetes.container.hash: 864f1847,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72449596055b292210d26b10cf658612b05649ff8f2c36e630a0d6eb77c165e,PodSandboxId:3c92abb5f30c156c368e3f4ee98df4118d1d7fc5895dac1ec68be42d7a7c2932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706644827288043840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx9f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95d8777b-0e61-4662-a7a6-1fb5e7b4
ae29,},Annotations:map[string]string{io.kubernetes.container.hash: abb540aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8295c64db63bbe1d715acc035968b7f49bc6d6f86510683cd26cf990b4b9884,PodSandboxId:758977e7d619f28c91d6bf057cf106caa772921e3d560db4628d23e0d40aff32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706644820953624553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d195cc1c68274636debff677374c054,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a4c49fba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d61da25db0bf1ee89777b06f1c63495712f70cb407815a697af8f1ebf47d8110,PodSandboxId:181e301e6b1e38061a8cb2aecd0027f992c0ecbc4d2019149326d9acd098330f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706644821074763250,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e85fa7283981ab3a029cbc7c4cbcc1,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28562e55286dbd1b6b2542e329a4c8a63bd25b1150086b965ebc3ce9b1b03ee,PodSandboxId:5461122dc33b880c598609bc34f9d945796afdc025ab64d6a5a5fcca52f6a50f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706644820514540266,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f18dcbbdea790709196864d2f77f8b,},Annotations:map[string]string{io.kubernetes.container.hash: bdedfea2,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d21225d28655d404694cddea91147b6fd3113009f8624387b35c1bc0d20109d,PodSandboxId:7675e7f1c3fb00265729af16bca58095ebc7d7ddf049f1d0ff7b414e8cb8d4fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706644820407437650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-572652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7787439db55e175a329eec0f92a7a11,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=48f0837b-4ea5-4d6d-a9fd-0743918173f6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d7e176929c4f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   d746280ab49d7       storage-provisioner
	e943636a73f5b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   199e41243845e       busybox-5b5d89c9d6-sbgq8
	f9a16232e2f67       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   1dc728ecf9786       coredns-5dd5756b68-579fc
	e3caba374245a       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   f0195166a9ac8       kindnet-rzx54
	badc83321619b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   d746280ab49d7       storage-provisioner
	c72449596055b       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   3c92abb5f30c1       kube-proxy-hx9f7
	d61da25db0bf1       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   181e301e6b1e3       kube-scheduler-multinode-572652
	b8295c64db63b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   758977e7d619f       etcd-multinode-572652
	b28562e55286d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   5461122dc33b8       kube-apiserver-multinode-572652
	4d21225d28655       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   7675e7f1c3fb0       kube-controller-manager-multinode-572652
	
	
	==> coredns [f9a16232e2f6727bfb0ead92294bf7fbc5f60d396fb52c10aef27fbac6048c1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38576 - 3058 "HINFO IN 3657000026625397583.7074542211665374005. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021409744s
	
	
	==> describe nodes <==
	Name:               multinode-572652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-572652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=multinode-572652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T19_50_01_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 19:49:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-572652
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 20:04:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 20:00:56 +0000   Tue, 30 Jan 2024 19:49:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 20:00:56 +0000   Tue, 30 Jan 2024 19:49:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 20:00:56 +0000   Tue, 30 Jan 2024 19:49:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 20:00:56 +0000   Tue, 30 Jan 2024 20:00:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    multinode-572652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfa7fbeb149d4a57be7f207808c12a2c
	  System UUID:                bfa7fbeb-149d-4a57-be7f-207808c12a2c
	  Boot ID:                    f300ad52-9838-4bcc-bc35-8ad9b80c8311
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace    Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                       ------------  ----------  ---------------  -------------  ---
	  default      busybox-5b5d89c9d6-sbgq8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system  coredns-5dd5756b68-579fc                   100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system  etcd-multinode-572652                      100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system  kindnet-rzx54                              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system  kube-apiserver-multinode-572652            250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system  kube-controller-manager-multinode-572652   200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system  kube-proxy-hx9f7                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system  kube-scheduler-multinode-572652            100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system  storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-572652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-572652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-572652 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           14m                    node-controller  Node multinode-572652 event: Registered Node multinode-572652 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-572652 status is now: NodeReady
	  Normal  Starting                 3m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m55s (x8 over 3m55s)  kubelet          Node multinode-572652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s (x8 over 3m55s)  kubelet          Node multinode-572652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s (x7 over 3m55s)  kubelet          Node multinode-572652 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m36s                  node-controller  Node multinode-572652 event: Registered Node multinode-572652 in Controller
	
	
	Name:               multinode-572652-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-572652-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=multinode-572652
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_30T20_04_10_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 20:02:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-572652-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 20:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 20:02:26 +0000   Tue, 30 Jan 2024 20:02:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 20:02:26 +0000   Tue, 30 Jan 2024 20:02:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 20:02:26 +0000   Tue, 30 Jan 2024 20:02:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 20:02:26 +0000   Tue, 30 Jan 2024 20:02:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    multinode-572652-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1da4b0235b854b5fb7a077db323439aa
	  System UUID:                1da4b023-5b85-4b5f-b7a0-77db323439aa
	  Boot ID:                    cb074093-2003-4ff2-a6c4-e8eb615fed84
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-w46sz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-w5jvc               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-rbwvp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 106s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-572652-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-572652-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-572652-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-572652-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m55s                  kubelet     Node multinode-572652-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m23s (x2 over 3m23s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 108s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  108s (x2 over 108s)    kubelet     Node multinode-572652-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    108s (x2 over 108s)    kubelet     Node multinode-572652-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s (x2 over 108s)    kubelet     Node multinode-572652-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  108s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                108s                   kubelet     Node multinode-572652-m02 status is now: NodeReady
	
	
	Name:               multinode-572652-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-572652-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=multinode-572652
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_30T20_04_10_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 20:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-572652-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 20:04:09 +0000   Tue, 30 Jan 2024 20:04:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 20:04:09 +0000   Tue, 30 Jan 2024 20:04:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 20:04:09 +0000   Tue, 30 Jan 2024 20:04:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 20:04:09 +0000   Tue, 30 Jan 2024 20:04:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    multinode-572652-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a2a496fae7945419024cb353fb5de14
	  System UUID:                9a2a496f-ae79-4541-9024-cb353fb5de14
	  Boot ID:                    10b721da-2d06-4240-8c82-bb386fbeae14
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-lfjc4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kindnet-srbck               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-j5sr4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-572652-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-572652-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-572652-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-572652-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-572652-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-572652-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-572652-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-572652-m03 status is now: NodeReady
	  Normal   NodeNotReady             75s                kubelet     Node multinode-572652-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        47s                kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-572652-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-572652-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-572652-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-572652-m03 status is now: NodeHasSufficientMemory
	
	
	==> dmesg <==
	[Jan30 19:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067043] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.361190] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.488523] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136883] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.443617] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan30 20:00] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.101469] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.134806] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.112843] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.213858] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +17.025897] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	
	
	==> etcd [b8295c64db63bbe1d715acc035968b7f49bc6d6f86510683cd26cf990b4b9884] <==
	{"level":"info","ts":"2024-01-30T20:00:22.57614Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-30T20:00:22.576151Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-30T20:00:22.576467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 switched to configuration voters=(2016870896152654549)"}
	{"level":"info","ts":"2024-01-30T20:00:22.576545Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","added-peer-id":"1bfd5d64eb00b2d5","added-peer-peer-urls":["https://192.168.39.186:2380"]}
	{"level":"info","ts":"2024-01-30T20:00:22.576625Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T20:00:22.57665Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T20:00:22.588477Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-30T20:00:22.590764Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"1bfd5d64eb00b2d5","initial-advertise-peer-urls":["https://192.168.39.186:2380"],"listen-peer-urls":["https://192.168.39.186:2380"],"advertise-client-urls":["https://192.168.39.186:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.186:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-30T20:00:22.590869Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-30T20:00:22.59102Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-01-30T20:00:22.591057Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-01-30T20:00:24.328793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-30T20:00:24.328955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-30T20:00:24.32902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgPreVoteResp from 1bfd5d64eb00b2d5 at term 2"}
	{"level":"info","ts":"2024-01-30T20:00:24.329056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became candidate at term 3"}
	{"level":"info","ts":"2024-01-30T20:00:24.32908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgVoteResp from 1bfd5d64eb00b2d5 at term 3"}
	{"level":"info","ts":"2024-01-30T20:00:24.329107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became leader at term 3"}
	{"level":"info","ts":"2024-01-30T20:00:24.329133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1bfd5d64eb00b2d5 elected leader 1bfd5d64eb00b2d5 at term 3"}
	{"level":"info","ts":"2024-01-30T20:00:24.331853Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1bfd5d64eb00b2d5","local-member-attributes":"{Name:multinode-572652 ClientURLs:[https://192.168.39.186:2379]}","request-path":"/0/members/1bfd5d64eb00b2d5/attributes","cluster-id":"7d06a36b1777ee5c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-30T20:00:24.331882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T20:00:24.332137Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T20:00:24.332189Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T20:00:24.331909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T20:00:24.333555Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-30T20:00:24.333922Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.186:2379"}
	
	
	==> kernel <==
	 20:04:14 up 4 min,  0 users,  load average: 0.20, 0.17, 0.09
	Linux multinode-572652 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [e3caba374245a41a87b97bd759a0e1121ff27f3451e82b9c6e69b83b3522c02f] <==
	I0130 20:03:41.476273       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0130 20:03:41.476372       1 main.go:227] handling current node
	I0130 20:03:41.476397       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0130 20:03:41.476428       1 main.go:250] Node multinode-572652-m02 has CIDR [10.244.1.0/24] 
	I0130 20:03:41.476527       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0130 20:03:41.476546       1 main.go:250] Node multinode-572652-m03 has CIDR [10.244.3.0/24] 
	I0130 20:03:51.490084       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0130 20:03:51.490247       1 main.go:227] handling current node
	I0130 20:03:51.490276       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0130 20:03:51.490295       1 main.go:250] Node multinode-572652-m02 has CIDR [10.244.1.0/24] 
	I0130 20:03:51.490407       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0130 20:03:51.490428       1 main.go:250] Node multinode-572652-m03 has CIDR [10.244.3.0/24] 
	I0130 20:04:01.598413       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0130 20:04:01.598533       1 main.go:227] handling current node
	I0130 20:04:01.598593       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0130 20:04:01.598600       1 main.go:250] Node multinode-572652-m02 has CIDR [10.244.1.0/24] 
	I0130 20:04:01.598982       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0130 20:04:01.598991       1 main.go:250] Node multinode-572652-m03 has CIDR [10.244.3.0/24] 
	I0130 20:04:11.604076       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0130 20:04:11.604242       1 main.go:227] handling current node
	I0130 20:04:11.604270       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0130 20:04:11.604290       1 main.go:250] Node multinode-572652-m02 has CIDR [10.244.1.0/24] 
	I0130 20:04:11.604406       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0130 20:04:11.604427       1 main.go:250] Node multinode-572652-m03 has CIDR [10.244.2.0/24] 
	I0130 20:04:11.604491       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.58 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [b28562e55286dbd1b6b2542e329a4c8a63bd25b1150086b965ebc3ce9b1b03ee] <==
	I0130 20:00:25.709297       1 establishing_controller.go:76] Starting EstablishingController
	I0130 20:00:25.709307       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0130 20:00:25.709316       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0130 20:00:25.709400       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0130 20:00:25.792472       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0130 20:00:25.841065       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0130 20:00:25.842756       1 shared_informer.go:318] Caches are synced for configmaps
	I0130 20:00:25.842830       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0130 20:00:25.842990       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0130 20:00:25.843026       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0130 20:00:25.843098       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0130 20:00:25.843738       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0130 20:00:25.844305       1 aggregator.go:166] initial CRD sync complete...
	I0130 20:00:25.844342       1 autoregister_controller.go:141] Starting autoregister controller
	I0130 20:00:25.844348       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0130 20:00:25.844354       1 cache.go:39] Caches are synced for autoregister controller
	I0130 20:00:25.850982       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0130 20:00:25.852498       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0130 20:00:26.652550       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0130 20:00:28.652847       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0130 20:00:28.798144       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0130 20:00:28.810496       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0130 20:00:28.882161       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0130 20:00:28.889003       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0130 20:01:16.210976       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4d21225d28655d404694cddea91147b6fd3113009f8624387b35c1bc0d20109d] <==
	I0130 20:02:26.479977       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-572652-m02\" does not exist"
	I0130 20:02:26.480041       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-572652-m03"
	I0130 20:02:26.480285       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-f2vmn" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-f2vmn"
	I0130 20:02:26.495790       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-572652-m02" podCIDRs=["10.244.1.0/24"]
	I0130 20:02:26.519048       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-572652-m03"
	I0130 20:02:26.529221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.681704ms"
	I0130 20:02:26.529375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.915µs"
	I0130 20:02:27.441512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="40.052µs"
	I0130 20:02:40.649172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="71.626µs"
	I0130 20:02:41.229521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="78.927µs"
	I0130 20:02:41.232089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="65.856µs"
	I0130 20:02:59.076025       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-572652-m02"
	I0130 20:04:05.917872       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-w46sz"
	I0130 20:04:05.938090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="35.917077ms"
	I0130 20:04:05.969339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="31.148625ms"
	I0130 20:04:05.969467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="58.276µs"
	I0130 20:04:07.492846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.918647ms"
	I0130 20:04:07.493214       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="69.436µs"
	I0130 20:04:08.927117       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-572652-m02"
	I0130 20:04:09.598284       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-572652-m02"
	I0130 20:04:09.600420       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-lfjc4" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-lfjc4"
	I0130 20:04:09.600822       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-572652-m03\" does not exist"
	I0130 20:04:09.616353       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-572652-m03" podCIDRs=["10.244.2.0/24"]
	I0130 20:04:09.741284       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-572652-m02"
	I0130 20:04:10.571495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="87.102µs"
	
	
	==> kube-proxy [c72449596055b292210d26b10cf658612b05649ff8f2c36e630a0d6eb77c165e] <==
	I0130 20:00:27.594793       1 server_others.go:69] "Using iptables proxy"
	I0130 20:00:27.606141       1 node.go:141] Successfully retrieved node IP: 192.168.39.186
	I0130 20:00:27.763854       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 20:00:27.763900       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 20:00:27.893236       1 server_others.go:152] "Using iptables Proxier"
	I0130 20:00:27.893298       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 20:00:27.893794       1 server.go:846] "Version info" version="v1.28.4"
	I0130 20:00:27.893805       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:00:27.919497       1 config.go:188] "Starting service config controller"
	I0130 20:00:27.919539       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 20:00:27.919559       1 config.go:97] "Starting endpoint slice config controller"
	I0130 20:00:27.919563       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 20:00:27.920213       1 config.go:315] "Starting node config controller"
	I0130 20:00:27.920221       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 20:00:28.021877       1 shared_informer.go:318] Caches are synced for node config
	I0130 20:00:28.022011       1 shared_informer.go:318] Caches are synced for service config
	I0130 20:00:28.022104       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d61da25db0bf1ee89777b06f1c63495712f70cb407815a697af8f1ebf47d8110] <==
	I0130 20:00:22.711189       1 serving.go:348] Generated self-signed cert in-memory
	W0130 20:00:25.732982       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0130 20:00:25.733146       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 20:00:25.733254       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0130 20:00:25.733281       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0130 20:00:25.797574       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0130 20:00:25.797651       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:00:25.800031       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0130 20:00:25.800075       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0130 20:00:25.803367       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0130 20:00:25.803658       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0130 20:00:25.901203       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 19:59:51 UTC, ends at Tue 2024-01-30 20:04:14 UTC. --
	Jan 30 20:00:28 multinode-572652 kubelet[917]: E0130 20:00:28.070473     917 projected.go:198] Error preparing data for projected volume kube-api-access-pcmlg for pod default/busybox-5b5d89c9d6-sbgq8: object "default"/"kube-root-ca.crt" not registered
	Jan 30 20:00:28 multinode-572652 kubelet[917]: E0130 20:00:28.070521     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb8e004e-7fa8-4a85-b493-390e1fd29719-kube-api-access-pcmlg podName:cb8e004e-7fa8-4a85-b493-390e1fd29719 nodeName:}" failed. No retries permitted until 2024-01-30 20:00:30.070508 +0000 UTC m=+10.867249318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pcmlg" (UniqueName: "kubernetes.io/projected/cb8e004e-7fa8-4a85-b493-390e1fd29719-kube-api-access-pcmlg") pod "busybox-5b5d89c9d6-sbgq8" (UID: "cb8e004e-7fa8-4a85-b493-390e1fd29719") : object "default"/"kube-root-ca.crt" not registered
	Jan 30 20:00:28 multinode-572652 kubelet[917]: E0130 20:00:28.478851     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-579fc" podUID="8ed4a94c-417c-480d-9f9a-4101a5103066"
	Jan 30 20:00:29 multinode-572652 kubelet[917]: E0130 20:00:29.480197     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-sbgq8" podUID="cb8e004e-7fa8-4a85-b493-390e1fd29719"
	Jan 30 20:00:29 multinode-572652 kubelet[917]: E0130 20:00:29.985239     917 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 30 20:00:29 multinode-572652 kubelet[917]: E0130 20:00:29.985378     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8ed4a94c-417c-480d-9f9a-4101a5103066-config-volume podName:8ed4a94c-417c-480d-9f9a-4101a5103066 nodeName:}" failed. No retries permitted until 2024-01-30 20:00:33.98530067 +0000 UTC m=+14.782042000 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8ed4a94c-417c-480d-9f9a-4101a5103066-config-volume") pod "coredns-5dd5756b68-579fc" (UID: "8ed4a94c-417c-480d-9f9a-4101a5103066") : object "kube-system"/"coredns" not registered
	Jan 30 20:00:30 multinode-572652 kubelet[917]: E0130 20:00:30.085977     917 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 30 20:00:30 multinode-572652 kubelet[917]: E0130 20:00:30.086006     917 projected.go:198] Error preparing data for projected volume kube-api-access-pcmlg for pod default/busybox-5b5d89c9d6-sbgq8: object "default"/"kube-root-ca.crt" not registered
	Jan 30 20:00:30 multinode-572652 kubelet[917]: E0130 20:00:30.086050     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb8e004e-7fa8-4a85-b493-390e1fd29719-kube-api-access-pcmlg podName:cb8e004e-7fa8-4a85-b493-390e1fd29719 nodeName:}" failed. No retries permitted until 2024-01-30 20:00:34.086037686 +0000 UTC m=+14.882779016 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pcmlg" (UniqueName: "kubernetes.io/projected/cb8e004e-7fa8-4a85-b493-390e1fd29719-kube-api-access-pcmlg") pod "busybox-5b5d89c9d6-sbgq8" (UID: "cb8e004e-7fa8-4a85-b493-390e1fd29719") : object "default"/"kube-root-ca.crt" not registered
	Jan 30 20:00:30 multinode-572652 kubelet[917]: E0130 20:00:30.479339     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-579fc" podUID="8ed4a94c-417c-480d-9f9a-4101a5103066"
	Jan 30 20:00:31 multinode-572652 kubelet[917]: E0130 20:00:31.478775     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-sbgq8" podUID="cb8e004e-7fa8-4a85-b493-390e1fd29719"
	Jan 30 20:00:31 multinode-572652 kubelet[917]: I0130 20:00:31.870231     917 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 30 20:00:58 multinode-572652 kubelet[917]: I0130 20:00:58.645285     917 scope.go:117] "RemoveContainer" containerID="badc83321619bbef2f188a7fd69413ee5cd59498e3e8da1abb143690753485f8"
	Jan 30 20:01:19 multinode-572652 kubelet[917]: E0130 20:01:19.496390     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:01:19 multinode-572652 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:01:19 multinode-572652 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:01:19 multinode-572652 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:02:19 multinode-572652 kubelet[917]: E0130 20:02:19.495576     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:02:19 multinode-572652 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:02:19 multinode-572652 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:02:19 multinode-572652 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:03:19 multinode-572652 kubelet[917]: E0130 20:03:19.496353     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:03:19 multinode-572652 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:03:19 multinode-572652 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:03:19 multinode-572652 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-572652 -n multinode-572652
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-572652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (694.67s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (142.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-572652 stop: exit status 82 (2m0.27030385s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-572652"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-572652 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 status
E0130 20:06:31.182642   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-572652 status: exit status 3 (18.80702403s)

                                                
                                                
-- stdout --
	multinode-572652
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-572652-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:06:36.435562   30497 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	E0130 20:06:36.435596   30497 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-572652 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-572652 -n multinode-572652
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-572652 -n multinode-572652: exit status 3 (3.162166918s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:06:39.763601   30604 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	E0130 20:06:39.763624   30604 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-572652" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.24s)

                                                
                                    
x
+
TestPreload (276.54s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-123750 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0130 20:16:10.819567   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 20:16:31.181883   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-123750 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m14.054344149s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-123750 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-123750 image pull gcr.io/k8s-minikube/busybox: (2.701135314s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-123750
E0130 20:18:07.773981   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 20:18:39.710732   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-123750: exit status 82 (2m0.271920141s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-123750"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-123750 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2024-01-30 20:19:15.553859794 +0000 UTC m=+3392.630829565
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-123750 -n test-preload-123750
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-123750 -n test-preload-123750: exit status 3 (18.614202063s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:19:34.163590   34051 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	E0130 20:19:34.163611   34051 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-123750" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-123750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-123750
E0130 20:19:34.228526   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
--- FAIL: TestPreload (276.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-473743 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-473743 --alsologtostderr -v=3: exit status 82 (2m0.294592243s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-473743"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 20:30:54.644415   44128 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:30:54.644617   44128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:30:54.644637   44128 out.go:309] Setting ErrFile to fd 2...
	I0130 20:30:54.644648   44128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:30:54.644821   44128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:30:54.645053   44128 out.go:303] Setting JSON to false
	I0130 20:30:54.645153   44128 mustload.go:65] Loading cluster: no-preload-473743
	I0130 20:30:54.645550   44128 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 20:30:54.645677   44128 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/config.json ...
	I0130 20:30:54.645864   44128 mustload.go:65] Loading cluster: no-preload-473743
	I0130 20:30:54.646029   44128 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 20:30:54.646084   44128 stop.go:39] StopHost: no-preload-473743
	I0130 20:30:54.646493   44128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:30:54.646567   44128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:30:54.663553   44128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40279
	I0130 20:30:54.664411   44128 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:30:54.665020   44128 main.go:141] libmachine: Using API Version  1
	I0130 20:30:54.665041   44128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:30:54.665375   44128 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:30:54.667405   44128 out.go:177] * Stopping node "no-preload-473743"  ...
	I0130 20:30:54.669245   44128 main.go:141] libmachine: Stopping "no-preload-473743"...
	I0130 20:30:54.669266   44128 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:30:54.671454   44128 main.go:141] libmachine: (no-preload-473743) Calling .Stop
	I0130 20:30:54.675598   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 0/120
	I0130 20:30:55.677827   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 1/120
	I0130 20:30:56.679462   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 2/120
	I0130 20:30:57.681888   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 3/120
	I0130 20:30:58.683073   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 4/120
	I0130 20:30:59.684984   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 5/120
	I0130 20:31:00.686243   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 6/120
	I0130 20:31:01.687689   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 7/120
	I0130 20:31:02.689146   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 8/120
	I0130 20:31:03.690395   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 9/120
	I0130 20:31:04.692504   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 10/120
	I0130 20:31:05.693919   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 11/120
	I0130 20:31:06.695740   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 12/120
	I0130 20:31:07.697309   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 13/120
	I0130 20:31:08.699056   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 14/120
	I0130 20:31:09.701065   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 15/120
	I0130 20:31:10.702631   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 16/120
	I0130 20:31:11.704030   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 17/120
	I0130 20:31:12.705877   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 18/120
	I0130 20:31:13.707167   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 19/120
	I0130 20:31:14.709150   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 20/120
	I0130 20:31:15.710698   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 21/120
	I0130 20:31:16.712515   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 22/120
	I0130 20:31:17.714235   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 23/120
	I0130 20:31:18.715570   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 24/120
	I0130 20:31:19.717403   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 25/120
	I0130 20:31:20.718665   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 26/120
	I0130 20:31:21.719942   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 27/120
	I0130 20:31:22.721782   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 28/120
	I0130 20:31:23.722895   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 29/120
	I0130 20:31:24.724882   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 30/120
	I0130 20:31:25.726196   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 31/120
	I0130 20:31:26.728028   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 32/120
	I0130 20:31:27.729333   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 33/120
	I0130 20:31:28.731475   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 34/120
	I0130 20:31:29.733889   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 35/120
	I0130 20:31:30.735005   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 36/120
	I0130 20:31:31.736516   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 37/120
	I0130 20:31:32.737878   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 38/120
	I0130 20:31:33.739779   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 39/120
	I0130 20:31:34.741878   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 40/120
	I0130 20:31:35.742975   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 41/120
	I0130 20:31:36.744257   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 42/120
	I0130 20:31:37.745247   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 43/120
	I0130 20:31:38.746423   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 44/120
	I0130 20:31:39.747878   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 45/120
	I0130 20:31:40.749816   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 46/120
	I0130 20:31:41.751777   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 47/120
	I0130 20:31:42.753688   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 48/120
	I0130 20:31:43.755058   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 49/120
	I0130 20:31:44.757222   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 50/120
	I0130 20:31:45.758766   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 51/120
	I0130 20:31:46.760688   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 52/120
	I0130 20:31:47.761790   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 53/120
	I0130 20:31:48.762939   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 54/120
	I0130 20:31:49.765049   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 55/120
	I0130 20:31:50.766718   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 56/120
	I0130 20:31:51.768653   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 57/120
	I0130 20:31:52.769911   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 58/120
	I0130 20:31:53.771240   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 59/120
	I0130 20:31:54.772745   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 60/120
	I0130 20:31:55.774267   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 61/120
	I0130 20:31:56.775498   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 62/120
	I0130 20:31:57.777433   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 63/120
	I0130 20:31:58.778607   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 64/120
	I0130 20:31:59.780399   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 65/120
	I0130 20:32:00.782189   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 66/120
	I0130 20:32:01.783404   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 67/120
	I0130 20:32:02.785736   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 68/120
	I0130 20:32:03.787072   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 69/120
	I0130 20:32:04.789178   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 70/120
	I0130 20:32:05.790560   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 71/120
	I0130 20:32:06.792153   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 72/120
	I0130 20:32:07.793440   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 73/120
	I0130 20:32:08.794661   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 74/120
	I0130 20:32:09.796317   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 75/120
	I0130 20:32:10.797649   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 76/120
	I0130 20:32:11.799076   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 77/120
	I0130 20:32:12.800397   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 78/120
	I0130 20:32:13.801722   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 79/120
	I0130 20:32:14.803882   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 80/120
	I0130 20:32:15.805147   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 81/120
	I0130 20:32:16.806402   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 82/120
	I0130 20:32:17.807649   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 83/120
	I0130 20:32:18.809719   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 84/120
	I0130 20:32:19.811418   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 85/120
	I0130 20:32:20.813692   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 86/120
	I0130 20:32:21.814913   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 87/120
	I0130 20:32:22.816246   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 88/120
	I0130 20:32:23.817398   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 89/120
	I0130 20:32:24.819293   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 90/120
	I0130 20:32:25.820694   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 91/120
	I0130 20:32:26.821975   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 92/120
	I0130 20:32:27.823178   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 93/120
	I0130 20:32:28.824432   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 94/120
	I0130 20:32:29.826229   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 95/120
	I0130 20:32:30.827392   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 96/120
	I0130 20:32:31.828698   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 97/120
	I0130 20:32:32.829925   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 98/120
	I0130 20:32:33.831289   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 99/120
	I0130 20:32:34.833020   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 100/120
	I0130 20:32:35.834112   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 101/120
	I0130 20:32:36.835251   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 102/120
	I0130 20:32:37.836434   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 103/120
	I0130 20:32:38.838010   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 104/120
	I0130 20:32:39.839689   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 105/120
	I0130 20:32:40.840728   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 106/120
	I0130 20:32:41.842060   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 107/120
	I0130 20:32:42.843231   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 108/120
	I0130 20:32:43.844613   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 109/120
	I0130 20:32:44.846644   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 110/120
	I0130 20:32:45.847956   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 111/120
	I0130 20:32:46.849448   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 112/120
	I0130 20:32:47.850511   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 113/120
	I0130 20:32:48.851972   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 114/120
	I0130 20:32:49.853631   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 115/120
	I0130 20:32:50.854764   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 116/120
	I0130 20:32:51.855996   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 117/120
	I0130 20:32:52.857297   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 118/120
	I0130 20:32:53.858617   44128 main.go:141] libmachine: (no-preload-473743) Waiting for machine to stop 119/120
	I0130 20:32:54.859346   44128 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0130 20:32:54.859406   44128 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0130 20:32:54.861262   44128 out.go:177] 
	W0130 20:32:54.862492   44128 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0130 20:32:54.862508   44128 out.go:239] * 
	* 
	W0130 20:32:54.864932   44128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 20:32:54.866159   44128 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-473743 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473743 -n no-preload-473743
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473743 -n no-preload-473743: exit status 3 (18.496355878s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:33:13.363572   44664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.220:22: connect: no route to host
	E0130 20:33:13.363589   44664 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.220:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-473743" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.79s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (138.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-208583 --alsologtostderr -v=3
E0130 20:31:31.181888   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-208583 --alsologtostderr -v=3: exit status 82 (2m0.28164321s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-208583"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 20:31:02.580980   44216 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:31:02.581125   44216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:31:02.581139   44216 out.go:309] Setting ErrFile to fd 2...
	I0130 20:31:02.581147   44216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:31:02.581343   44216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:31:02.581616   44216 out.go:303] Setting JSON to false
	I0130 20:31:02.581729   44216 mustload.go:65] Loading cluster: embed-certs-208583
	I0130 20:31:02.582120   44216 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:31:02.582208   44216 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/config.json ...
	I0130 20:31:02.582383   44216 mustload.go:65] Loading cluster: embed-certs-208583
	I0130 20:31:02.582510   44216 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:31:02.582555   44216 stop.go:39] StopHost: embed-certs-208583
	I0130 20:31:02.583034   44216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:31:02.583083   44216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:31:02.597991   44216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40665
	I0130 20:31:02.598445   44216 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:31:02.599029   44216 main.go:141] libmachine: Using API Version  1
	I0130 20:31:02.599056   44216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:31:02.599381   44216 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:31:02.602004   44216 out.go:177] * Stopping node "embed-certs-208583"  ...
	I0130 20:31:02.603846   44216 main.go:141] libmachine: Stopping "embed-certs-208583"...
	I0130 20:31:02.603862   44216 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:31:02.605637   44216 main.go:141] libmachine: (embed-certs-208583) Calling .Stop
	I0130 20:31:02.608963   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 0/120
	I0130 20:31:03.610416   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 1/120
	I0130 20:31:04.611628   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 2/120
	I0130 20:31:05.613842   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 3/120
	I0130 20:31:06.615352   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 4/120
	I0130 20:31:07.617189   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 5/120
	I0130 20:31:08.618718   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 6/120
	I0130 20:31:09.620396   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 7/120
	I0130 20:31:10.622259   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 8/120
	I0130 20:31:11.623873   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 9/120
	I0130 20:31:12.625806   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 10/120
	I0130 20:31:13.628068   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 11/120
	I0130 20:31:14.629881   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 12/120
	I0130 20:31:15.631315   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 13/120
	I0130 20:31:16.632672   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 14/120
	I0130 20:31:17.634500   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 15/120
	I0130 20:31:18.635937   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 16/120
	I0130 20:31:19.637426   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 17/120
	I0130 20:31:20.638769   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 18/120
	I0130 20:31:21.640077   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 19/120
	I0130 20:31:22.642307   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 20/120
	I0130 20:31:23.644264   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 21/120
	I0130 20:31:24.646529   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 22/120
	I0130 20:31:25.647782   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 23/120
	I0130 20:31:26.649600   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 24/120
	I0130 20:31:27.651750   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 25/120
	I0130 20:31:28.653093   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 26/120
	I0130 20:31:29.654344   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 27/120
	I0130 20:31:30.655692   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 28/120
	I0130 20:31:31.657058   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 29/120
	I0130 20:31:32.659033   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 30/120
	I0130 20:31:33.660380   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 31/120
	I0130 20:31:34.661777   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 32/120
	I0130 20:31:35.663481   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 33/120
	I0130 20:31:36.664859   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 34/120
	I0130 20:31:37.666715   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 35/120
	I0130 20:31:38.668245   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 36/120
	I0130 20:31:39.669598   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 37/120
	I0130 20:31:40.670785   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 38/120
	I0130 20:31:41.672324   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 39/120
	I0130 20:31:42.674340   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 40/120
	I0130 20:31:43.676118   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 41/120
	I0130 20:31:44.677766   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 42/120
	I0130 20:31:45.679019   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 43/120
	I0130 20:31:46.680205   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 44/120
	I0130 20:31:47.681968   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 45/120
	I0130 20:31:48.683188   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 46/120
	I0130 20:31:49.685416   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 47/120
	I0130 20:31:50.686566   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 48/120
	I0130 20:31:51.688375   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 49/120
	I0130 20:31:52.690448   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 50/120
	I0130 20:31:53.691834   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 51/120
	I0130 20:31:54.693118   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 52/120
	I0130 20:31:55.694316   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 53/120
	I0130 20:31:56.695895   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 54/120
	I0130 20:31:57.697627   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 55/120
	I0130 20:31:58.698856   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 56/120
	I0130 20:31:59.700184   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 57/120
	I0130 20:32:00.701897   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 58/120
	I0130 20:32:01.703549   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 59/120
	I0130 20:32:02.705711   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 60/120
	I0130 20:32:03.707281   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 61/120
	I0130 20:32:04.708833   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 62/120
	I0130 20:32:05.710194   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 63/120
	I0130 20:32:06.711623   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 64/120
	I0130 20:32:07.713582   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 65/120
	I0130 20:32:08.715024   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 66/120
	I0130 20:32:09.716333   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 67/120
	I0130 20:32:10.717740   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 68/120
	I0130 20:32:11.719147   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 69/120
	I0130 20:32:12.721257   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 70/120
	I0130 20:32:13.722508   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 71/120
	I0130 20:32:14.723705   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 72/120
	I0130 20:32:15.725109   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 73/120
	I0130 20:32:16.726526   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 74/120
	I0130 20:32:17.728413   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 75/120
	I0130 20:32:18.729861   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 76/120
	I0130 20:32:19.731097   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 77/120
	I0130 20:32:20.732262   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 78/120
	I0130 20:32:21.733764   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 79/120
	I0130 20:32:22.735535   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 80/120
	I0130 20:32:23.737662   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 81/120
	I0130 20:32:24.739110   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 82/120
	I0130 20:32:25.741345   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 83/120
	I0130 20:32:26.742565   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 84/120
	I0130 20:32:27.744085   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 85/120
	I0130 20:32:28.745665   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 86/120
	I0130 20:32:29.746857   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 87/120
	I0130 20:32:30.748033   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 88/120
	I0130 20:32:31.749958   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 89/120
	I0130 20:32:32.751848   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 90/120
	I0130 20:32:33.753639   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 91/120
	I0130 20:32:34.754802   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 92/120
	I0130 20:32:35.756055   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 93/120
	I0130 20:32:36.757198   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 94/120
	I0130 20:32:37.758849   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 95/120
	I0130 20:32:38.760072   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 96/120
	I0130 20:32:39.761337   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 97/120
	I0130 20:32:40.763110   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 98/120
	I0130 20:32:41.764487   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 99/120
	I0130 20:32:42.766344   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 100/120
	I0130 20:32:43.767519   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 101/120
	I0130 20:32:44.768692   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 102/120
	I0130 20:32:45.769829   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 103/120
	I0130 20:32:46.771075   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 104/120
	I0130 20:32:47.772877   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 105/120
	I0130 20:32:48.774250   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 106/120
	I0130 20:32:49.775682   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 107/120
	I0130 20:32:50.776796   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 108/120
	I0130 20:32:51.778055   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 109/120
	I0130 20:32:52.780287   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 110/120
	I0130 20:32:53.781615   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 111/120
	I0130 20:32:54.782795   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 112/120
	I0130 20:32:55.784012   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 113/120
	I0130 20:32:56.785242   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 114/120
	I0130 20:32:57.786929   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 115/120
	I0130 20:32:58.788080   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 116/120
	I0130 20:32:59.789385   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 117/120
	I0130 20:33:00.791146   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 118/120
	I0130 20:33:01.792466   44216 main.go:141] libmachine: (embed-certs-208583) Waiting for machine to stop 119/120
	I0130 20:33:02.793652   44216 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0130 20:33:02.793697   44216 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0130 20:33:02.795822   44216 out.go:177] 
	W0130 20:33:02.797482   44216 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0130 20:33:02.797499   44216 out.go:239] * 
	* 
	W0130 20:33:02.799741   44216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 20:33:02.801064   44216 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-208583 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208583 -n embed-certs-208583
E0130 20:33:07.771330   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208583 -n embed-certs-208583: exit status 3 (18.497102294s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:33:21.299572   44716 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.63:22: connect: no route to host
	E0130 20:33:21.299594   44716 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.63:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-208583" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.78s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-877742 --alsologtostderr -v=3
E0130 20:32:50.820094   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-877742 --alsologtostderr -v=3: exit status 82 (2m0.266473817s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-877742"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 20:32:04.260003   44516 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:32:04.260259   44516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:32:04.260269   44516 out.go:309] Setting ErrFile to fd 2...
	I0130 20:32:04.260273   44516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:32:04.260486   44516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:32:04.260752   44516 out.go:303] Setting JSON to false
	I0130 20:32:04.260852   44516 mustload.go:65] Loading cluster: default-k8s-diff-port-877742
	I0130 20:32:04.261216   44516 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:32:04.261298   44516 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/config.json ...
	I0130 20:32:04.261480   44516 mustload.go:65] Loading cluster: default-k8s-diff-port-877742
	I0130 20:32:04.261608   44516 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:32:04.261654   44516 stop.go:39] StopHost: default-k8s-diff-port-877742
	I0130 20:32:04.262067   44516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:32:04.262110   44516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:32:04.276241   44516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0130 20:32:04.276663   44516 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:32:04.277170   44516 main.go:141] libmachine: Using API Version  1
	I0130 20:32:04.277191   44516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:32:04.277486   44516 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:32:04.279713   44516 out.go:177] * Stopping node "default-k8s-diff-port-877742"  ...
	I0130 20:32:04.280883   44516 main.go:141] libmachine: Stopping "default-k8s-diff-port-877742"...
	I0130 20:32:04.280900   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:32:04.282271   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Stop
	I0130 20:32:04.285471   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 0/120
	I0130 20:32:05.286904   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 1/120
	I0130 20:32:06.288013   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 2/120
	I0130 20:32:07.289777   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 3/120
	I0130 20:32:08.291174   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 4/120
	I0130 20:32:09.293086   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 5/120
	I0130 20:32:10.294355   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 6/120
	I0130 20:32:11.295450   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 7/120
	I0130 20:32:12.296750   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 8/120
	I0130 20:32:13.297937   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 9/120
	I0130 20:32:14.300145   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 10/120
	I0130 20:32:15.301473   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 11/120
	I0130 20:32:16.302709   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 12/120
	I0130 20:32:17.304060   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 13/120
	I0130 20:32:18.305431   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 14/120
	I0130 20:32:19.307293   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 15/120
	I0130 20:32:20.308395   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 16/120
	I0130 20:32:21.309622   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 17/120
	I0130 20:32:22.311703   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 18/120
	I0130 20:32:23.312901   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 19/120
	I0130 20:32:24.314754   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 20/120
	I0130 20:32:25.315887   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 21/120
	I0130 20:32:26.317263   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 22/120
	I0130 20:32:27.318394   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 23/120
	I0130 20:32:28.319747   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 24/120
	I0130 20:32:29.322068   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 25/120
	I0130 20:32:30.323552   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 26/120
	I0130 20:32:31.324836   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 27/120
	I0130 20:32:32.325895   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 28/120
	I0130 20:32:33.327201   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 29/120
	I0130 20:32:34.329031   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 30/120
	I0130 20:32:35.330350   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 31/120
	I0130 20:32:36.331501   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 32/120
	I0130 20:32:37.333628   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 33/120
	I0130 20:32:38.335349   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 34/120
	I0130 20:32:39.337097   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 35/120
	I0130 20:32:40.338310   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 36/120
	I0130 20:32:41.339639   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 37/120
	I0130 20:32:42.341681   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 38/120
	I0130 20:32:43.343119   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 39/120
	I0130 20:32:44.345215   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 40/120
	I0130 20:32:45.346558   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 41/120
	I0130 20:32:46.347783   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 42/120
	I0130 20:32:47.349641   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 43/120
	I0130 20:32:48.350918   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 44/120
	I0130 20:32:49.352660   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 45/120
	I0130 20:32:50.353970   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 46/120
	I0130 20:32:51.355257   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 47/120
	I0130 20:32:52.356572   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 48/120
	I0130 20:32:53.357731   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 49/120
	I0130 20:32:54.359832   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 50/120
	I0130 20:32:55.360989   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 51/120
	I0130 20:32:56.362303   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 52/120
	I0130 20:32:57.363715   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 53/120
	I0130 20:32:58.365639   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 54/120
	I0130 20:32:59.367244   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 55/120
	I0130 20:33:00.368456   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 56/120
	I0130 20:33:01.369656   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 57/120
	I0130 20:33:02.371030   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 58/120
	I0130 20:33:03.372338   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 59/120
	I0130 20:33:04.374425   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 60/120
	I0130 20:33:05.375890   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 61/120
	I0130 20:33:06.377484   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 62/120
	I0130 20:33:07.378657   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 63/120
	I0130 20:33:08.380322   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 64/120
	I0130 20:33:09.382017   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 65/120
	I0130 20:33:10.383339   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 66/120
	I0130 20:33:11.384509   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 67/120
	I0130 20:33:12.385772   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 68/120
	I0130 20:33:13.386945   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 69/120
	I0130 20:33:14.388868   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 70/120
	I0130 20:33:15.390295   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 71/120
	I0130 20:33:16.392416   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 72/120
	I0130 20:33:17.393865   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 73/120
	I0130 20:33:18.395261   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 74/120
	I0130 20:33:19.396995   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 75/120
	I0130 20:33:20.399443   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 76/120
	I0130 20:33:21.401234   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 77/120
	I0130 20:33:22.402510   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 78/120
	I0130 20:33:23.403980   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 79/120
	I0130 20:33:24.405417   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 80/120
	I0130 20:33:25.406860   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 81/120
	I0130 20:33:26.408273   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 82/120
	I0130 20:33:27.409699   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 83/120
	I0130 20:33:28.411063   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 84/120
	I0130 20:33:29.413066   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 85/120
	I0130 20:33:30.414381   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 86/120
	I0130 20:33:31.415809   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 87/120
	I0130 20:33:32.417146   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 88/120
	I0130 20:33:33.418445   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 89/120
	I0130 20:33:34.420523   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 90/120
	I0130 20:33:35.421971   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 91/120
	I0130 20:33:36.423300   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 92/120
	I0130 20:33:37.425519   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 93/120
	I0130 20:33:38.426747   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 94/120
	I0130 20:33:39.428507   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 95/120
	I0130 20:33:40.429741   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 96/120
	I0130 20:33:41.430977   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 97/120
	I0130 20:33:42.432241   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 98/120
	I0130 20:33:43.433341   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 99/120
	I0130 20:33:44.435295   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 100/120
	I0130 20:33:45.436554   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 101/120
	I0130 20:33:46.437775   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 102/120
	I0130 20:33:47.438983   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 103/120
	I0130 20:33:48.440100   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 104/120
	I0130 20:33:49.441944   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 105/120
	I0130 20:33:50.443090   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 106/120
	I0130 20:33:51.444429   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 107/120
	I0130 20:33:52.445435   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 108/120
	I0130 20:33:53.446546   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 109/120
	I0130 20:33:54.448465   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 110/120
	I0130 20:33:55.449619   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 111/120
	I0130 20:33:56.450642   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 112/120
	I0130 20:33:57.451848   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 113/120
	I0130 20:33:58.453348   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 114/120
	I0130 20:33:59.455105   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 115/120
	I0130 20:34:00.456858   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 116/120
	I0130 20:34:01.458013   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 117/120
	I0130 20:34:02.459220   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 118/120
	I0130 20:34:03.460582   44516 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for machine to stop 119/120
	I0130 20:34:04.461697   44516 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0130 20:34:04.461765   44516 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0130 20:34:04.463534   44516 out.go:177] 
	W0130 20:34:04.464802   44516 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0130 20:34:04.464816   44516 out.go:239] * 
	* 
	W0130 20:34:04.467117   44516 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 20:34:04.468571   44516 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-877742 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742: exit status 3 (18.525441704s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:34:22.995550   45226 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.52:22: connect: no route to host
	E0130 20:34:22.995572   45226 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.52:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877742" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473743 -n no-preload-473743
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473743 -n no-preload-473743: exit status 3 (3.169422708s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:33:16.531678   44776 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.220:22: connect: no route to host
	E0130 20:33:16.531701   44776 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.220:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-473743 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-473743 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.150327747s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.220:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-473743 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473743 -n no-preload-473743
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473743 -n no-preload-473743: exit status 3 (3.063697986s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:33:25.747876   44875 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.220:22: connect: no route to host
	E0130 20:33:25.747893   44875 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.220:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-473743" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
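The EnableAddonAfterStop failures follow directly from the failed stop: the test first asserts that the post-stop host status is "Stopped", but `status --format={{.Host}}` prints "Error", and `addons enable dashboard` then exits with status 11 (MK_ADDON_ENABLE_PAUSED) because the paused-check cannot SSH into the unreachable node. A rough sketch of the shape of that post-stop assertion; this is illustrative only, not the actual start_stop_delete_test.go code:

package verify

import (
	"os/exec"
	"strings"
	"testing"
)

// checkPostStopHostStatus mirrors the assertion logged at
// start_stop_delete_test.go:239-241: after a stop, the host state printed by
// `minikube status --format={{.Host}}` should be "Stopped". In the runs above
// it comes back "Error" because SSH to the node has no route to host.
func checkPostStopHostStatus(t *testing.T, bin, profile string) {
	t.Helper()
	// Output still carries stdout ("Error") even when the command exits
	// non-zero, so the exit error is intentionally ignored here.
	out, _ := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
	if got := strings.TrimSpace(string(out)); got != "Stopped" {
		t.Errorf("expected post-stop host status to be \"Stopped\" but got %q", got)
	}
}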

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208583 -n embed-certs-208583
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208583 -n embed-certs-208583: exit status 3 (3.168042099s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:33:24.467555   44834 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.63:22: connect: no route to host
	E0130 20:33:24.467572   44834 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.63:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-208583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-208583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153037163s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.63:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-208583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208583 -n embed-certs-208583
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208583 -n embed-certs-208583: exit status 3 (3.063449568s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:33:33.683646   44996 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.63:22: connect: no route to host
	E0130 20:33:33.683664   44996 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.63:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-208583" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (138.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-150971 --alsologtostderr -v=3
E0130 20:33:39.711091   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-150971 --alsologtostderr -v=3: exit status 82 (2m0.274313793s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-150971"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 20:33:38.347705   45133 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:33:38.347982   45133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:33:38.347992   45133 out.go:309] Setting ErrFile to fd 2...
	I0130 20:33:38.347996   45133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:33:38.348191   45133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:33:38.348402   45133 out.go:303] Setting JSON to false
	I0130 20:33:38.348480   45133 mustload.go:65] Loading cluster: old-k8s-version-150971
	I0130 20:33:38.348805   45133 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:33:38.348867   45133 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/config.json ...
	I0130 20:33:38.349027   45133 mustload.go:65] Loading cluster: old-k8s-version-150971
	I0130 20:33:38.349134   45133 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:33:38.349159   45133 stop.go:39] StopHost: old-k8s-version-150971
	I0130 20:33:38.349531   45133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:33:38.349575   45133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:33:38.363657   45133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43823
	I0130 20:33:38.364072   45133 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:33:38.364687   45133 main.go:141] libmachine: Using API Version  1
	I0130 20:33:38.364718   45133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:33:38.365039   45133 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:33:38.367894   45133 out.go:177] * Stopping node "old-k8s-version-150971"  ...
	I0130 20:33:38.369334   45133 main.go:141] libmachine: Stopping "old-k8s-version-150971"...
	I0130 20:33:38.369352   45133 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:33:38.371043   45133 main.go:141] libmachine: (old-k8s-version-150971) Calling .Stop
	I0130 20:33:38.374591   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 0/120
	I0130 20:33:39.376006   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 1/120
	I0130 20:33:40.377281   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 2/120
	I0130 20:33:41.378572   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 3/120
	I0130 20:33:42.379948   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 4/120
	I0130 20:33:43.381326   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 5/120
	I0130 20:33:44.382776   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 6/120
	I0130 20:33:45.384167   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 7/120
	I0130 20:33:46.385519   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 8/120
	I0130 20:33:47.386674   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 9/120
	I0130 20:33:48.388622   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 10/120
	I0130 20:33:49.389963   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 11/120
	I0130 20:33:50.391213   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 12/120
	I0130 20:33:51.392458   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 13/120
	I0130 20:33:52.393694   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 14/120
	I0130 20:33:53.395602   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 15/120
	I0130 20:33:54.396867   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 16/120
	I0130 20:33:55.398134   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 17/120
	I0130 20:33:56.399214   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 18/120
	I0130 20:33:57.400513   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 19/120
	I0130 20:33:58.402679   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 20/120
	I0130 20:33:59.403911   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 21/120
	I0130 20:34:00.405226   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 22/120
	I0130 20:34:01.406482   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 23/120
	I0130 20:34:02.407674   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 24/120
	I0130 20:34:03.409510   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 25/120
	I0130 20:34:04.410768   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 26/120
	I0130 20:34:05.412315   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 27/120
	I0130 20:34:06.413821   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 28/120
	I0130 20:34:07.414986   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 29/120
	I0130 20:34:08.416432   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 30/120
	I0130 20:34:09.417699   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 31/120
	I0130 20:34:10.419074   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 32/120
	I0130 20:34:11.420487   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 33/120
	I0130 20:34:12.421741   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 34/120
	I0130 20:34:13.423799   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 35/120
	I0130 20:34:14.425149   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 36/120
	I0130 20:34:15.426552   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 37/120
	I0130 20:34:16.427981   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 38/120
	I0130 20:34:17.429667   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 39/120
	I0130 20:34:18.431754   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 40/120
	I0130 20:34:19.433001   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 41/120
	I0130 20:34:20.434333   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 42/120
	I0130 20:34:21.435724   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 43/120
	I0130 20:34:22.437528   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 44/120
	I0130 20:34:23.439328   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 45/120
	I0130 20:34:24.440650   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 46/120
	I0130 20:34:25.441902   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 47/120
	I0130 20:34:26.443346   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 48/120
	I0130 20:34:27.444541   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 49/120
	I0130 20:34:28.446495   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 50/120
	I0130 20:34:29.447743   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 51/120
	I0130 20:34:30.448990   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 52/120
	I0130 20:34:31.450308   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 53/120
	I0130 20:34:32.451357   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 54/120
	I0130 20:34:33.453252   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 55/120
	I0130 20:34:34.454708   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 56/120
	I0130 20:34:35.456080   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 57/120
	I0130 20:34:36.457470   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 58/120
	I0130 20:34:37.458782   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 59/120
	I0130 20:34:38.461077   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 60/120
	I0130 20:34:39.462386   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 61/120
	I0130 20:34:40.463880   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 62/120
	I0130 20:34:41.465261   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 63/120
	I0130 20:34:42.466567   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 64/120
	I0130 20:34:43.468316   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 65/120
	I0130 20:34:44.469633   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 66/120
	I0130 20:34:45.470904   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 67/120
	I0130 20:34:46.472289   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 68/120
	I0130 20:34:47.473531   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 69/120
	I0130 20:34:48.475570   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 70/120
	I0130 20:34:49.477724   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 71/120
	I0130 20:34:50.479091   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 72/120
	I0130 20:34:51.480672   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 73/120
	I0130 20:34:52.482085   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 74/120
	I0130 20:34:53.484016   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 75/120
	I0130 20:34:54.485420   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 76/120
	I0130 20:34:55.486881   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 77/120
	I0130 20:34:56.488307   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 78/120
	I0130 20:34:57.489620   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 79/120
	I0130 20:34:58.491863   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 80/120
	I0130 20:34:59.493350   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 81/120
	I0130 20:35:00.494779   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 82/120
	I0130 20:35:01.496171   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 83/120
	I0130 20:35:02.497680   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 84/120
	I0130 20:35:03.499556   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 85/120
	I0130 20:35:04.500878   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 86/120
	I0130 20:35:05.502161   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 87/120
	I0130 20:35:06.503874   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 88/120
	I0130 20:35:07.505151   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 89/120
	I0130 20:35:08.507250   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 90/120
	I0130 20:35:09.509507   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 91/120
	I0130 20:35:10.510958   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 92/120
	I0130 20:35:11.512278   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 93/120
	I0130 20:35:12.513692   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 94/120
	I0130 20:35:13.515688   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 95/120
	I0130 20:35:14.517148   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 96/120
	I0130 20:35:15.518452   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 97/120
	I0130 20:35:16.519875   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 98/120
	I0130 20:35:17.521644   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 99/120
	I0130 20:35:18.523738   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 100/120
	I0130 20:35:19.525033   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 101/120
	I0130 20:35:20.526314   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 102/120
	I0130 20:35:21.527517   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 103/120
	I0130 20:35:22.528775   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 104/120
	I0130 20:35:23.530401   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 105/120
	I0130 20:35:24.531903   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 106/120
	I0130 20:35:25.533213   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 107/120
	I0130 20:35:26.534606   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 108/120
	I0130 20:35:27.535750   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 109/120
	I0130 20:35:28.537665   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 110/120
	I0130 20:35:29.539618   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 111/120
	I0130 20:35:30.541091   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 112/120
	I0130 20:35:31.542419   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 113/120
	I0130 20:35:32.543784   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 114/120
	I0130 20:35:33.546014   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 115/120
	I0130 20:35:34.547327   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 116/120
	I0130 20:35:35.548695   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 117/120
	I0130 20:35:36.549998   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 118/120
	I0130 20:35:37.551124   45133 main.go:141] libmachine: (old-k8s-version-150971) Waiting for machine to stop 119/120
	I0130 20:35:38.552176   45133 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0130 20:35:38.552221   45133 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0130 20:35:38.554383   45133 out.go:177] 
	W0130 20:35:38.555868   45133 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0130 20:35:38.555884   45133 out.go:239] * 
	* 
	W0130 20:35:38.558270   45133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 20:35:38.559820   45133 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-150971 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-150971 -n old-k8s-version-150971
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-150971 -n old-k8s-version-150971: exit status 3 (18.642721755s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:35:57.203592   45644 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host
	E0130 20:35:57.203617   45644 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-150971" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (138.92s)
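The stop itself times out the same way every time: the driver polls the VM state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120"), the guest never leaves the Running state, and minikube exits with GUEST_STOP_TIMEOUT (exit status 82). A simplified sketch of that bounded wait, assuming a getState callback; this is illustrative, not minikube's actual stop code:

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// waitForStop polls getState about once per second, up to maxRetries attempts,
// mirroring the "Waiting for machine to stop N/120" lines in the log above.
func waitForStop(getState func() (string, error), maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		if state, err := getState(); err == nil && state == "Stopped" {
			return nil
		}
		log.Printf("Waiting for machine to stop %d/%d", i, maxRetries)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A guest that never shuts down reproduces the timeout path; the CI run
	// above uses 120 retries, a smaller count keeps this demo short.
	stuck := func() (string, error) { return "Running", nil }
	if err := waitForStop(stuck, 3); err != nil {
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}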

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742: exit status 3 (3.167860808s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:34:26.163608   45341 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.52:22: connect: no route to host
	E0130 20:34:26.163627   45341 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.52:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-877742 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-877742 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152494749s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.52:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-877742 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742: exit status 3 (3.063480984s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:34:35.379699   45401 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.52:22: connect: no route to host
	E0130 20:34:35.379718   45401 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.52:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877742" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-150971 -n old-k8s-version-150971
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-150971 -n old-k8s-version-150971: exit status 3 (3.167856269s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:36:00.371631   45708 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host
	E0130 20:36:00.371650   45708 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-150971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-150971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153540532s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-150971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-150971 -n old-k8s-version-150971
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-150971 -n old-k8s-version-150971: exit status 3 (3.061697885s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 20:36:09.587631   45778 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host
	E0130 20:36:09.587654   45778 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-150971" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0130 20:43:39.710363   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208583 -n embed-certs-208583
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-30 20:52:16.700542683 +0000 UTC m=+5373.777512457
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208583 -n embed-certs-208583
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-208583 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-208583 logs -n 25: (1.684262861s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:28 UTC | 30 Jan 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:28 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-757744 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | disable-driver-mounts-757744                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:31 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473743             | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208583            | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC | 30 Jan 24 20:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-877742  | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC | 30 Jan 24 20:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC |                     |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473743                  | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208583                 | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-150971        | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-877742       | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC | 30 Jan 24 20:48 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-150971             | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC | 30 Jan 24 20:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 20:36:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 20:36:09.643751   45819 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:36:09.644027   45819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:36:09.644038   45819 out.go:309] Setting ErrFile to fd 2...
	I0130 20:36:09.644045   45819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:36:09.644230   45819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:36:09.644766   45819 out.go:303] Setting JSON to false
	I0130 20:36:09.645668   45819 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4715,"bootTime":1706642255,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 20:36:09.645727   45819 start.go:138] virtualization: kvm guest
	I0130 20:36:09.648102   45819 out.go:177] * [old-k8s-version-150971] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 20:36:09.649772   45819 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 20:36:09.651000   45819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 20:36:09.649826   45819 notify.go:220] Checking for updates...
	I0130 20:36:09.653462   45819 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:36:09.654761   45819 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 20:36:09.655939   45819 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 20:36:09.657140   45819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 20:36:09.658638   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:36:09.659027   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:36:09.659066   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:36:09.672985   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39323
	I0130 20:36:09.673381   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:36:09.673876   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:36:09.673897   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:36:09.674191   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:36:09.674351   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:36:09.676038   45819 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0130 20:36:09.677315   45819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 20:36:09.677582   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:36:09.677630   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:36:09.691259   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0130 20:36:09.691604   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:36:09.692060   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:36:09.692089   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:36:09.692371   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:36:09.692555   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:36:09.726172   45819 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 20:36:09.727421   45819 start.go:298] selected driver: kvm2
	I0130 20:36:09.727433   45819 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:36:09.727546   45819 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 20:36:09.728186   45819 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:36:09.728255   45819 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 20:36:09.742395   45819 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 20:36:09.742715   45819 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 20:36:09.742771   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:36:09.742784   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:36:09.742794   45819 start_flags.go:321] config:
	{Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:36:09.742977   45819 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:36:09.745577   45819 out.go:177] * Starting control plane node old-k8s-version-150971 in cluster old-k8s-version-150971
	I0130 20:36:10.483495   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:09.746820   45819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 20:36:09.746852   45819 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0130 20:36:09.746865   45819 cache.go:56] Caching tarball of preloaded images
	I0130 20:36:09.746951   45819 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 20:36:09.746960   45819 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0130 20:36:09.747061   45819 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/config.json ...
	I0130 20:36:09.747229   45819 start.go:365] acquiring machines lock for old-k8s-version-150971: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:36:13.555547   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:19.635533   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:22.707498   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:28.787473   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:31.859544   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:37.939524   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:41.011456   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:47.091510   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:50.163505   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:56.243497   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:59.315474   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:05.395536   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:08.467514   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:14.547517   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:17.619561   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:23.699509   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:26.771568   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:32.851483   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:35.923502   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:42.003515   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:45.075526   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:51.155512   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:54.227514   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:38:00.307532   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:38:03.311451   45037 start.go:369] acquired machines lock for "embed-certs-208583" in 4m29.471089592s
	I0130 20:38:03.311507   45037 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:03.311514   45037 fix.go:54] fixHost starting: 
	I0130 20:38:03.311893   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:03.311933   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:03.326477   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0130 20:38:03.326949   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:03.327373   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:03.327403   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:03.327758   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:03.327946   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:03.328115   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:03.329604   45037 fix.go:102] recreateIfNeeded on embed-certs-208583: state=Stopped err=<nil>
	I0130 20:38:03.329646   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	W0130 20:38:03.329810   45037 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:03.331493   45037 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208583" ...
	I0130 20:38:03.332735   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Start
	I0130 20:38:03.332862   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring networks are active...
	I0130 20:38:03.333514   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring network default is active
	I0130 20:38:03.333859   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring network mk-embed-certs-208583 is active
	I0130 20:38:03.334154   45037 main.go:141] libmachine: (embed-certs-208583) Getting domain xml...
	I0130 20:38:03.334860   45037 main.go:141] libmachine: (embed-certs-208583) Creating domain...
	I0130 20:38:03.309254   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:03.309293   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:38:03.311318   44923 machine.go:91] provisioned docker machine in 4m37.382925036s
	I0130 20:38:03.311359   44923 fix.go:56] fixHost completed within 4m37.403399512s
	I0130 20:38:03.311364   44923 start.go:83] releasing machines lock for "no-preload-473743", held for 4m37.403435936s
	W0130 20:38:03.311387   44923 start.go:694] error starting host: provision: host is not running
	W0130 20:38:03.311504   44923 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0130 20:38:03.311518   44923 start.go:709] Will try again in 5 seconds ...
	I0130 20:38:04.507963   45037 main.go:141] libmachine: (embed-certs-208583) Waiting to get IP...
	I0130 20:38:04.508755   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.509133   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.509207   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.509115   46132 retry.go:31] will retry after 189.527185ms: waiting for machine to come up
	I0130 20:38:04.700560   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.701193   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.701223   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.701137   46132 retry.go:31] will retry after 239.29825ms: waiting for machine to come up
	I0130 20:38:04.941612   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.942080   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.942116   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.942040   46132 retry.go:31] will retry after 388.672579ms: waiting for machine to come up
	I0130 20:38:05.332617   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:05.333018   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:05.333041   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:05.332968   46132 retry.go:31] will retry after 525.5543ms: waiting for machine to come up
	I0130 20:38:05.859677   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:05.860094   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:05.860126   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:05.860055   46132 retry.go:31] will retry after 595.87535ms: waiting for machine to come up
	I0130 20:38:06.457828   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:06.458220   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:06.458244   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:06.458197   46132 retry.go:31] will retry after 766.148522ms: waiting for machine to come up
	I0130 20:38:07.226151   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:07.226615   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:07.226652   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:07.226558   46132 retry.go:31] will retry after 843.449223ms: waiting for machine to come up
	I0130 20:38:08.070983   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:08.071381   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:08.071407   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:08.071338   46132 retry.go:31] will retry after 1.079839146s: waiting for machine to come up
	I0130 20:38:08.313897   44923 start.go:365] acquiring machines lock for no-preload-473743: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:38:09.152768   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:09.153087   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:09.153113   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:09.153034   46132 retry.go:31] will retry after 1.855245571s: waiting for machine to come up
	I0130 20:38:11.010893   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:11.011260   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:11.011299   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:11.011196   46132 retry.go:31] will retry after 2.159062372s: waiting for machine to come up
	I0130 20:38:13.172734   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:13.173144   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:13.173173   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:13.173106   46132 retry.go:31] will retry after 2.73165804s: waiting for machine to come up
	I0130 20:38:15.908382   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:15.908803   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:15.908834   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:15.908732   46132 retry.go:31] will retry after 3.268718285s: waiting for machine to come up
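	(Editor's note: the retry.go lines above show the kvm2 driver polling for the restarted VM's IP with a growing, jittered backoff, from ~190ms up to a few seconds, until a DHCP lease appears. Below is a minimal Go sketch of that wait-with-backoff pattern; the function names, the backoff cap, and lookupIP are assumptions for illustration, not minikube's actual implementation.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the hypervisor's DHCP leases; it is a
	// placeholder for illustration only and always fails here.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls lookupIP with a growing, jittered backoff until an IP is
	// found or the deadline expires, mirroring the retry.go log lines above.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			if backoff < 4*time.Second {
				backoff *= 2 // cap chosen for the sketch only
			}
		}
		return "", fmt.Errorf("timed out after %v waiting for machine IP", timeout)
	}

	func main() {
		if _, err := waitForIP(5 * time.Second); err != nil {
			fmt.Println(err)
		}
	}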
	I0130 20:38:23.603972   45441 start.go:369] acquired machines lock for "default-k8s-diff-port-877742" in 3m48.064811183s
	I0130 20:38:23.604051   45441 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:23.604061   45441 fix.go:54] fixHost starting: 
	I0130 20:38:23.604420   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:23.604456   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:23.620189   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0130 20:38:23.620538   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:23.621035   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:38:23.621073   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:23.621415   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:23.621584   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:23.621739   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:38:23.623158   45441 fix.go:102] recreateIfNeeded on default-k8s-diff-port-877742: state=Stopped err=<nil>
	I0130 20:38:23.623185   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	W0130 20:38:23.623382   45441 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:23.625974   45441 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-877742" ...
	I0130 20:38:19.178930   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:19.179358   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:19.179389   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:19.179300   46132 retry.go:31] will retry after 3.117969425s: waiting for machine to come up
	I0130 20:38:22.300539   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.300957   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has current primary IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.300982   45037 main.go:141] libmachine: (embed-certs-208583) Found IP for machine: 192.168.61.63
	I0130 20:38:22.300997   45037 main.go:141] libmachine: (embed-certs-208583) Reserving static IP address...
	I0130 20:38:22.301371   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "embed-certs-208583", mac: "52:54:00:43:f2:e1", ip: "192.168.61.63"} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.301395   45037 main.go:141] libmachine: (embed-certs-208583) Reserved static IP address: 192.168.61.63
	I0130 20:38:22.301409   45037 main.go:141] libmachine: (embed-certs-208583) DBG | skip adding static IP to network mk-embed-certs-208583 - found existing host DHCP lease matching {name: "embed-certs-208583", mac: "52:54:00:43:f2:e1", ip: "192.168.61.63"}
	I0130 20:38:22.301420   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Getting to WaitForSSH function...
	I0130 20:38:22.301436   45037 main.go:141] libmachine: (embed-certs-208583) Waiting for SSH to be available...
	I0130 20:38:22.303472   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.303820   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.303842   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.303968   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Using SSH client type: external
	I0130 20:38:22.304011   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa (-rw-------)
	I0130 20:38:22.304042   45037 main.go:141] libmachine: (embed-certs-208583) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:38:22.304052   45037 main.go:141] libmachine: (embed-certs-208583) DBG | About to run SSH command:
	I0130 20:38:22.304065   45037 main.go:141] libmachine: (embed-certs-208583) DBG | exit 0
	I0130 20:38:22.398610   45037 main.go:141] libmachine: (embed-certs-208583) DBG | SSH cmd err, output: <nil>: 
	I0130 20:38:22.398945   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetConfigRaw
	I0130 20:38:22.399605   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:22.402157   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.402531   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.402569   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.402759   45037 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/config.json ...
	I0130 20:38:22.402974   45037 machine.go:88] provisioning docker machine ...
	I0130 20:38:22.402999   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:22.403238   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.403440   45037 buildroot.go:166] provisioning hostname "embed-certs-208583"
	I0130 20:38:22.403462   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.403642   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.405694   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.406026   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.406055   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.406180   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.406429   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.406599   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.406734   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.406904   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:22.407422   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:22.407446   45037 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208583 && echo "embed-certs-208583" | sudo tee /etc/hostname
	I0130 20:38:22.548206   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208583
	
	I0130 20:38:22.548240   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.550933   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.551316   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.551345   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.551492   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.551690   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.551821   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.551934   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.552129   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:22.552425   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:22.552441   45037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:38:22.687464   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
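	(Editor's note: the SSH command above is an idempotent /etc/hosts edit: if the hostname is already present it does nothing, otherwise it rewrites the 127.0.1.1 line or appends one. A hedged Go sketch of the same logic follows; the function name and the temp-file path in main are illustrative, not minikube code.)

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell above: skip if the hostname already
	// appears, else replace the 127.0.1.1 line or append a new entry.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
			return nil // already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		entry := "127.0.1.1 " + hostname
		var out string
		if loopback.Match(data) {
			out = loopback.ReplaceAllString(string(data), entry)
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
		}
		return os.WriteFile(path, []byte(out), 0644)
	}

	func main() {
		// Run against a copy, not the real /etc/hosts.
		if err := ensureHostsEntry("/tmp/hosts-copy", "embed-certs-208583"); err != nil {
			fmt.Println(err)
		}
	}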
	I0130 20:38:22.687491   45037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:38:22.687536   45037 buildroot.go:174] setting up certificates
	I0130 20:38:22.687551   45037 provision.go:83] configureAuth start
	I0130 20:38:22.687562   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.687813   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:22.690307   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.690664   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.690686   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.690855   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.693139   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.693426   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.693462   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.693597   45037 provision.go:138] copyHostCerts
	I0130 20:38:22.693667   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:38:22.693686   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:38:22.693766   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:38:22.693866   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:38:22.693876   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:38:22.693912   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:38:22.693986   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:38:22.693997   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:38:22.694036   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:38:22.694122   45037 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208583 san=[192.168.61.63 192.168.61.63 localhost 127.0.0.1 minikube embed-certs-208583]
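	(Editor's note: provision.go:112 above generates a server certificate carrying both IP and DNS SANs for the VM. The compressed crypto/x509 sketch below shows what a SAN-bearing server certificate like that looks like; it self-signs rather than loading the cluster CA key to stay short, so it is illustrative only and not minikube's code.)

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key and template carrying the same kind of SAN list seen in the log:
		// the VM IP, loopback, and the machine names.
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-208583"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.61.63"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "embed-certs-208583"},
		}
		// Self-signed here for brevity; the real flow signs with the cluster CA.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}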
	I0130 20:38:22.862847   45037 provision.go:172] copyRemoteCerts
	I0130 20:38:22.862902   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:38:22.862921   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.865533   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.865812   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.865842   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.866006   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.866200   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.866315   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.866496   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:22.959746   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:38:22.982164   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 20:38:23.004087   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:38:23.025875   45037 provision.go:86] duration metric: configureAuth took 338.306374ms
	I0130 20:38:23.025896   45037 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:38:23.026090   45037 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:23.026173   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.028688   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.028913   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.028946   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.029125   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.029277   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.029430   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.029550   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.029679   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:23.029980   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:23.029995   45037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:38:23.337986   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:38:23.338008   45037 machine.go:91] provisioned docker machine in 935.018208ms
	I0130 20:38:23.338016   45037 start.go:300] post-start starting for "embed-certs-208583" (driver="kvm2")
	I0130 20:38:23.338026   45037 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:38:23.338051   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.338301   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:38:23.338327   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.341005   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.341398   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.341429   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.341516   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.341686   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.341825   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.341997   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.437500   45037 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:38:23.441705   45037 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:38:23.441724   45037 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:38:23.441784   45037 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:38:23.441851   45037 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:38:23.441937   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:38:23.450700   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:23.471898   45037 start.go:303] post-start completed in 133.870929ms
	I0130 20:38:23.471916   45037 fix.go:56] fixHost completed within 20.160401625s
	I0130 20:38:23.471940   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.474341   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.474659   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.474695   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.474793   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.474984   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.475181   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.475341   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.475515   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:23.475878   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:23.475891   45037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:38:23.603819   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647103.552984334
	
	I0130 20:38:23.603841   45037 fix.go:206] guest clock: 1706647103.552984334
	I0130 20:38:23.603848   45037 fix.go:219] Guest: 2024-01-30 20:38:23.552984334 +0000 UTC Remote: 2024-01-30 20:38:23.471920461 +0000 UTC m=+289.780929635 (delta=81.063873ms)
	I0130 20:38:23.603879   45037 fix.go:190] guest clock delta is within tolerance: 81.063873ms
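	(Editor's note: fix.go above reads the guest clock over SSH, compares it with the host clock, and accepts the start because the 81ms delta is within tolerance. The minimal Go sketch below reproduces that comparison using the exact timestamps from the log; the 2-second tolerance is an assumption for the sketch, not the value minikube uses.)

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports the absolute guest/host clock skew and whether it
	// falls within the given tolerance.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Values taken from the log lines above.
		guest := time.Unix(1706647103, 552984334)
		host := time.Date(2024, 1, 30, 20, 38, 23, 471920461, time.UTC)
		delta, ok := clockDeltaOK(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=81.063873ms
	}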
	I0130 20:38:23.603885   45037 start.go:83] releasing machines lock for "embed-certs-208583", held for 20.292396099s
	I0130 20:38:23.603916   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.604168   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:23.606681   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.607027   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.607060   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.607190   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607703   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607876   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607947   45037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:38:23.607999   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.608115   45037 ssh_runner.go:195] Run: cat /version.json
	I0130 20:38:23.608140   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.610693   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611052   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.611078   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611154   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611199   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.611380   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.611530   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.611585   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.611625   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611666   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.611790   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.611935   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.612081   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.612197   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.725868   45037 ssh_runner.go:195] Run: systemctl --version
	I0130 20:38:23.731516   45037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:38:23.872093   45037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:38:23.878418   45037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:38:23.878493   45037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:38:23.892910   45037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:38:23.892934   45037 start.go:475] detecting cgroup driver to use...
	I0130 20:38:23.893007   45037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:38:23.905950   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:38:23.917437   45037 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:38:23.917484   45037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:38:23.929241   45037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:38:23.940979   45037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:38:24.045106   45037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:38:24.160413   45037 docker.go:233] disabling docker service ...
	I0130 20:38:24.160486   45037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:38:24.173684   45037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:38:24.185484   45037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:38:24.308292   45037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:38:24.430021   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:38:24.442910   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:38:24.460145   45037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:38:24.460211   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.469163   45037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:38:24.469225   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.478396   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.487374   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.496306   45037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:38:24.505283   45037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:38:24.512919   45037 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:38:24.512974   45037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:38:24.523939   45037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
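The sysctl probe above fails because br_netfilter is not loaded yet, so the log falls back to modprobe and then turns on IPv4 forwarding directly. A rough Go sketch of that fallback, assuming root privileges and the modprobe binary being available:

// enable_bridge_netfilter.go - sketch of the br_netfilter/ip_forward fallback.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// The sysctl only exists once the br_netfilter module is loaded.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v\n%s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward (requires root).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
		os.Exit(1)
	}
	fmt.Println("br_netfilter ensured and ip_forward enabled")
}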
	I0130 20:38:24.533002   45037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:38:24.665917   45037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:38:24.839797   45037 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:38:24.839866   45037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:38:24.851397   45037 start.go:543] Will wait 60s for crictl version
	I0130 20:38:24.851454   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:38:24.855227   45037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:38:24.888083   45037 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:38:24.888163   45037 ssh_runner.go:195] Run: crio --version
	I0130 20:38:24.934626   45037 ssh_runner.go:195] Run: crio --version
	I0130 20:38:24.984233   45037 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
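The "Will wait 60s for socket path /var/run/crio/crio.sock" step above is a simple poll for the runtime socket after restarting CRI-O. A minimal Go sketch of that wait (an illustration, not minikube's code; the socket path and 60s budget are taken from the log):

// wait_for_socket.go - poll for the CRI-O socket until it exists or time runs out.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("socket ready:", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
	os.Exit(1)
}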
	I0130 20:38:23.627365   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Start
	I0130 20:38:23.627532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring networks are active...
	I0130 20:38:23.628247   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring network default is active
	I0130 20:38:23.628650   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring network mk-default-k8s-diff-port-877742 is active
	I0130 20:38:23.629109   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Getting domain xml...
	I0130 20:38:23.629715   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Creating domain...
	I0130 20:38:24.849156   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting to get IP...
	I0130 20:38:24.850261   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:24.850701   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:24.850729   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:24.850645   46249 retry.go:31] will retry after 259.328149ms: waiting for machine to come up
	I0130 20:38:25.112451   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.112941   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.112971   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.112905   46249 retry.go:31] will retry after 283.994822ms: waiting for machine to come up
	I0130 20:38:25.398452   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.398937   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.398968   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.398904   46249 retry.go:31] will retry after 348.958329ms: waiting for machine to come up
	I0130 20:38:24.985681   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:24.988666   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:24.989016   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:24.989042   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:24.989288   45037 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0130 20:38:24.993626   45037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:25.005749   45037 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:38:25.005817   45037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:25.047605   45037 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:38:25.047674   45037 ssh_runner.go:195] Run: which lz4
	I0130 20:38:25.051662   45037 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0130 20:38:25.055817   45037 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:38:25.055849   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:38:26.895244   45037 crio.go:444] Took 1.843605 seconds to copy over tarball
	I0130 20:38:26.895332   45037 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
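The preload step above copies the cached image tarball to the guest and unpacks it under /var with the exact tar invocation shown. A minimal Go sketch of just the extraction (assuming the tar and lz4 binaries exist on the guest; the scp of the tarball is omitted):

// extract_preload.go - run the tar extraction command seen in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract preload:", err)
		os.Exit(1)
	}
	fmt.Println("preloaded images extracted under /var")
}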
	I0130 20:38:25.749560   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.750020   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.750048   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.749985   46249 retry.go:31] will retry after 597.656366ms: waiting for machine to come up
	I0130 20:38:26.349518   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.349957   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.350004   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:26.349929   46249 retry.go:31] will retry after 600.926171ms: waiting for machine to come up
	I0130 20:38:26.952713   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.953319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.953343   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:26.953276   46249 retry.go:31] will retry after 654.976543ms: waiting for machine to come up
	I0130 20:38:27.610017   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:27.610464   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:27.610494   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:27.610413   46249 retry.go:31] will retry after 881.075627ms: waiting for machine to come up
	I0130 20:38:28.493641   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:28.494188   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:28.494218   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:28.494136   46249 retry.go:31] will retry after 1.436302447s: waiting for machine to come up
	I0130 20:38:29.932271   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:29.932794   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:29.932825   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:29.932729   46249 retry.go:31] will retry after 1.394659615s: waiting for machine to come up
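The "will retry after ..." lines above are a retry loop that waits for the restarted VM to obtain a DHCP lease, with the delay growing between attempts. A rough Go sketch of that pattern, under stated assumptions: lookupIP is a hypothetical stand-in for the libvirt lease query, and the backoff formula here only approximates the intervals shown in the log.

// wait_for_ip.go - retry with a growing, jittered delay until an IP appears.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical placeholder for "read the DHCP lease for this MAC".
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("no lease yet")
	}
	return "192.168.72.52", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		// Grow the delay and add jitter, mirroring the increasing retry intervals.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("retry %d: %v, waiting %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}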
	I0130 20:38:29.834721   45037 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.939351369s)
	I0130 20:38:29.834746   45037 crio.go:451] Took 2.939470 seconds to extract the tarball
	I0130 20:38:29.834754   45037 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:38:29.875618   45037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:29.921569   45037 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:38:29.921593   45037 cache_images.go:84] Images are preloaded, skipping loading
	I0130 20:38:29.921661   45037 ssh_runner.go:195] Run: crio config
	I0130 20:38:29.981565   45037 cni.go:84] Creating CNI manager for ""
	I0130 20:38:29.981590   45037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:29.981612   45037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:38:29.981637   45037 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.63 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-208583 NodeName:embed-certs-208583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:38:29.981824   45037 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-208583"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:38:29.981919   45037 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-208583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-208583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:38:29.981984   45037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:38:29.991601   45037 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:38:29.991665   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:38:30.000815   45037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0130 20:38:30.016616   45037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:38:30.032999   45037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0130 20:38:30.052735   45037 ssh_runner.go:195] Run: grep 192.168.61.63	control-plane.minikube.internal$ /etc/hosts
	I0130 20:38:30.057008   45037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:30.069968   45037 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583 for IP: 192.168.61.63
	I0130 20:38:30.070004   45037 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:30.070164   45037 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:38:30.070201   45037 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:38:30.070263   45037 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/client.key
	I0130 20:38:30.070323   45037 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.key.9879da99
	I0130 20:38:30.070370   45037 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.key
	I0130 20:38:30.070496   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:38:30.070531   45037 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:38:30.070541   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:38:30.070561   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:38:30.070586   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:38:30.070612   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:38:30.070659   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:30.071211   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:38:30.098665   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:38:30.125013   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:38:30.150013   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:38:30.177206   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:38:30.202683   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:38:30.225774   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:38:30.249090   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:38:30.274681   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:38:30.302316   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:38:30.326602   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:38:30.351136   45037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:38:30.368709   45037 ssh_runner.go:195] Run: openssl version
	I0130 20:38:30.374606   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:38:30.386421   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.391240   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.391314   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.397082   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:38:30.409040   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:38:30.420910   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.425929   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.425971   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.431609   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:38:30.443527   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:38:30.455200   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.460242   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.460307   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.466225   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
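The openssl/ln pairs above compute each CA certificate's subject hash and create the /etc/ssl/certs/<hash>.0 symlink that OpenSSL uses for lookup (for example b5213941.0 for minikubeCA.pem). A minimal Go sketch of one such hash-and-link step, assuming the openssl CLI is available and the process runs as root:

// link_ca_cert.go - hash a CA cert and create the /etc/ssl/certs/<hash>.0 link.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("linked", link, "->", cert)
}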
	I0130 20:38:30.479406   45037 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:38:30.485331   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:38:30.493468   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:38:30.499465   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:38:30.505394   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:38:30.511152   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:38:30.516951   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
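Each `openssl x509 ... -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; this is a sketch for illustration (cert paths taken from the log), not the code minikube runs:

// check_cert_expiry.go - report certs that expire within 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}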
	I0130 20:38:30.522596   45037 kubeadm.go:404] StartCluster: {Name:embed-certs-208583 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-208583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:38:30.522698   45037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:38:30.522747   45037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:30.559669   45037 cri.go:89] found id: ""
	I0130 20:38:30.559740   45037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:38:30.571465   45037 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:38:30.571487   45037 kubeadm.go:636] restartCluster start
	I0130 20:38:30.571539   45037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:38:30.581398   45037 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:30.582366   45037 kubeconfig.go:92] found "embed-certs-208583" server: "https://192.168.61.63:8443"
	I0130 20:38:30.584719   45037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:38:30.593986   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:30.594031   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:30.606926   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.094476   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:31.094545   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:31.106991   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.594553   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:31.594633   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:31.607554   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:32.094029   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:32.094114   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:32.107447   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:32.594998   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:32.595079   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:32.607929   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:33.094468   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:33.094562   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:33.111525   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:33.594502   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:33.594578   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:33.611216   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
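The repeated "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pairs above are a poll: pgrep is run roughly every half second until a kube-apiserver process appears or the overall deadline expires (the log later reports "context deadline exceeded"). A minimal Go sketch of that loop, using the same pgrep arguments; the 10s timeout here is illustrative, not minikube's actual budget:

// poll_apiserver.go - poll for a kube-apiserver process until a deadline.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // a matching process exists
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // context.DeadlineExceeded, as seen in the log
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println("apiserver not up:", err)
		return
	}
	fmt.Println("apiserver process found")
}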
	I0130 20:38:31.329366   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:31.329720   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:31.329739   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:31.329672   46249 retry.go:31] will retry after 1.8606556s: waiting for machine to come up
	I0130 20:38:33.192538   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:33.192916   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:33.192938   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:33.192873   46249 retry.go:31] will retry after 2.294307307s: waiting for machine to come up
	I0130 20:38:34.094151   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:34.094223   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:34.106531   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:34.594098   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:34.594172   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:34.606286   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.094891   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:35.094995   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:35.106949   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.594452   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:35.594532   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:35.611066   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:36.094606   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:36.094684   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:36.110348   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:36.595021   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:36.595084   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:36.609884   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:37.094347   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:37.094445   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:37.106709   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:37.594248   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:37.594348   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:37.610367   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:38.095063   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:38.095141   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:38.107195   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:38.594024   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:38.594139   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:38.606041   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.489701   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:35.490129   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:35.490166   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:35.490071   46249 retry.go:31] will retry after 2.434575636s: waiting for machine to come up
	I0130 20:38:37.927709   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:37.928168   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:37.928198   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:37.928111   46249 retry.go:31] will retry after 3.073200884s: waiting for machine to come up
	I0130 20:38:39.094490   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:39.094572   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:39.106154   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:39.594866   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:39.594961   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:39.606937   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.094464   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:40.094549   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:40.106068   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.594556   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:40.594637   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:40.606499   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.606523   45037 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:38:40.606544   45037 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:38:40.606554   45037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:38:40.606605   45037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:40.646444   45037 cri.go:89] found id: ""
	I0130 20:38:40.646505   45037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:38:40.661886   45037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:38:40.670948   45037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:38:40.671008   45037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:38:40.679749   45037 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:38:40.679771   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:40.780597   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:41.804175   45037 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.023537725s)
	I0130 20:38:41.804214   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:41.999624   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:42.103064   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:42.173522   45037 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:38:42.173628   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:42.674417   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:43.173996   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:43.674137   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
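The cluster restart above reruns the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the prewritten /var/tmp/minikube/kubeadm.yaml. A minimal Go sketch of driving that same sequence; the binary path comes from the log's PATH prefix, and error handling is simplified:

// kubeadm_phases.go - run the kubeadm init phases seen in the log, in order.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm" // path from the log
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
			return
		}
	}
}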
	I0130 20:38:41.004686   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:41.005140   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:41.005165   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:41.005085   46249 retry.go:31] will retry after 3.766414086s: waiting for machine to come up
	I0130 20:38:44.773568   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.774049   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Found IP for machine: 192.168.72.52
	I0130 20:38:44.774082   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has current primary IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.774099   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Reserving static IP address...
	I0130 20:38:44.774494   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-877742", mac: "52:54:00:c4:e0:0b", ip: "192.168.72.52"} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.774517   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Reserved static IP address: 192.168.72.52
	I0130 20:38:44.774543   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | skip adding static IP to network mk-default-k8s-diff-port-877742 - found existing host DHCP lease matching {name: "default-k8s-diff-port-877742", mac: "52:54:00:c4:e0:0b", ip: "192.168.72.52"}
	I0130 20:38:44.774561   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for SSH to be available...
	I0130 20:38:44.774589   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Getting to WaitForSSH function...
	I0130 20:38:44.776761   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.777079   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.777114   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.777210   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Using SSH client type: external
	I0130 20:38:44.777242   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa (-rw-------)
	I0130 20:38:44.777299   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:38:44.777332   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | About to run SSH command:
	I0130 20:38:44.777352   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | exit 0
	I0130 20:38:44.875219   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | SSH cmd err, output: <nil>: 
	I0130 20:38:44.875515   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetConfigRaw
	I0130 20:38:44.876243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:44.878633   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.879035   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.879069   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.879336   45441 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/config.json ...
	I0130 20:38:44.879504   45441 machine.go:88] provisioning docker machine ...
	I0130 20:38:44.879522   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:44.879734   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:44.879889   45441 buildroot.go:166] provisioning hostname "default-k8s-diff-port-877742"
	I0130 20:38:44.879932   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:44.880102   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:44.882426   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.882753   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.882777   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.882927   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:44.883099   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:44.883246   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:44.883409   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:44.883569   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:44.884066   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:44.884092   45441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-877742 && echo "default-k8s-diff-port-877742" | sudo tee /etc/hostname
	I0130 20:38:45.030801   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-877742
	
	I0130 20:38:45.030847   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.033532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.033897   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.033955   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.034094   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.034309   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.034489   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.034644   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.034826   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:45.035168   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:45.035187   45441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-877742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-877742/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-877742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:38:45.175807   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:45.175849   45441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:38:45.175884   45441 buildroot.go:174] setting up certificates
	I0130 20:38:45.175907   45441 provision.go:83] configureAuth start
	I0130 20:38:45.175923   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:45.176200   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:45.179102   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.179489   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.179526   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.179664   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.182178   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.182532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.182560   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.182666   45441 provision.go:138] copyHostCerts
	I0130 20:38:45.182716   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:38:45.182728   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:38:45.182788   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:38:45.182895   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:38:45.182910   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:38:45.182973   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:38:45.183054   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:38:45.183065   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:38:45.183090   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:38:45.183158   45441 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-877742 san=[192.168.72.52 192.168.72.52 localhost 127.0.0.1 minikube default-k8s-diff-port-877742]
	I0130 20:38:45.352895   45441 provision.go:172] copyRemoteCerts
	I0130 20:38:45.352960   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:38:45.352986   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.355820   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.356141   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.356169   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.356343   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.356540   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.356717   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.356868   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
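Note on the provisioning lines above: the sshutil entries show minikube opening an SSH session to the VM at 192.168.72.52:22 as the docker user with the machine's id_rsa key, then running provisioning commands (hostname fix-up, cert copies) over that session. The following is a minimal, hypothetical Go sketch of that pattern using golang.org/x/crypto/ssh; the key path and the command are placeholders, and this is not minikube's actual sshutil implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Load the machine's private key (placeholder path, not the real profile path).
	keyBytes, err := os.ReadFile("/path/to/machines/<profile>/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local VM
	}

	// Mirrors the GetSSHHostname/GetSSHPort values logged above.
	client, err := ssh.Dial("tcp", "192.168.72.52:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}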
	I0130 20:38:46.136084   45819 start.go:369] acquired machines lock for "old-k8s-version-150971" in 2m36.388823473s
	I0130 20:38:46.136157   45819 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:46.136169   45819 fix.go:54] fixHost starting: 
	I0130 20:38:46.136624   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:46.136669   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:46.153210   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33685
	I0130 20:38:46.153604   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:46.154080   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:38:46.154104   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:46.154422   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:46.154630   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:38:46.154771   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:38:46.156388   45819 fix.go:102] recreateIfNeeded on old-k8s-version-150971: state=Stopped err=<nil>
	I0130 20:38:46.156420   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	W0130 20:38:46.156613   45819 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:46.158388   45819 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-150971" ...
	I0130 20:38:45.456511   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:38:45.483324   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0130 20:38:45.510567   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:38:45.535387   45441 provision.go:86] duration metric: configureAuth took 359.467243ms
	I0130 20:38:45.535421   45441 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:38:45.535659   45441 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:45.535749   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.538712   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.539176   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.539214   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.539334   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.539574   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.539741   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.539995   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.540244   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:45.540770   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:45.540796   45441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:38:45.877778   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:38:45.877813   45441 machine.go:91] provisioned docker machine in 998.294632ms
	I0130 20:38:45.877825   45441 start.go:300] post-start starting for "default-k8s-diff-port-877742" (driver="kvm2")
	I0130 20:38:45.877845   45441 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:38:45.877869   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:45.878190   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:38:45.878224   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.881167   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.881533   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.881566   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.881704   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.881880   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.882064   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.882207   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:45.972932   45441 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:38:45.977412   45441 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:38:45.977437   45441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:38:45.977514   45441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:38:45.977593   45441 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:38:45.977694   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:38:45.985843   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:46.008484   45441 start.go:303] post-start completed in 130.643321ms
	I0130 20:38:46.008509   45441 fix.go:56] fixHost completed within 22.404447995s
	I0130 20:38:46.008533   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.011463   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.011901   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.011944   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.012088   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.012304   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.012500   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.012647   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.012803   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:46.013202   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:46.013226   45441 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:38:46.135930   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647126.077813825
	
	I0130 20:38:46.135955   45441 fix.go:206] guest clock: 1706647126.077813825
	I0130 20:38:46.135965   45441 fix.go:219] Guest: 2024-01-30 20:38:46.077813825 +0000 UTC Remote: 2024-01-30 20:38:46.008513384 +0000 UTC m=+250.621109629 (delta=69.300441ms)
	I0130 20:38:46.135988   45441 fix.go:190] guest clock delta is within tolerance: 69.300441ms
	I0130 20:38:46.135993   45441 start.go:83] releasing machines lock for "default-k8s-diff-port-877742", held for 22.53196506s
	I0130 20:38:46.136021   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.136315   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:46.139211   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.139549   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.139581   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.139695   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140427   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140507   45441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:38:46.140555   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.140639   45441 ssh_runner.go:195] Run: cat /version.json
	I0130 20:38:46.140661   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.143348   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143614   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143651   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.143675   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143843   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.144027   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.144081   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.144110   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.144228   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.144253   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.144434   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.144434   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.144580   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.144707   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.241499   45441 ssh_runner.go:195] Run: systemctl --version
	I0130 20:38:46.264180   45441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:38:46.417654   45441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:38:46.423377   45441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:38:46.423450   45441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:38:46.439524   45441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:38:46.439549   45441 start.go:475] detecting cgroup driver to use...
	I0130 20:38:46.439612   45441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:38:46.456668   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:38:46.469494   45441 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:38:46.469547   45441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:38:46.482422   45441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:38:46.496031   45441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:38:46.601598   45441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:38:46.710564   45441 docker.go:233] disabling docker service ...
	I0130 20:38:46.710633   45441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:38:46.724084   45441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:38:46.736019   45441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:38:46.853310   45441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:38:46.976197   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:38:46.991033   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:38:47.009961   45441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:38:47.010028   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.019749   45441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:38:47.019822   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.032215   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.043642   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.056005   45441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:38:47.068954   45441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:38:47.079752   45441 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:38:47.079823   45441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:38:47.096106   45441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:38:47.109074   45441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:38:47.243783   45441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:38:47.468971   45441 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:38:47.469055   45441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:38:47.474571   45441 start.go:543] Will wait 60s for crictl version
	I0130 20:38:47.474646   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:38:47.479007   45441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:38:47.525155   45441 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:38:47.525259   45441 ssh_runner.go:195] Run: crio --version
	I0130 20:38:47.582308   45441 ssh_runner.go:195] Run: crio --version
	I0130 20:38:47.648689   45441 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
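Note on the CRI-O configuration steps above: before restarting crio, the test run pins the pause image to registry.k8s.io/pause:3.9 and forces the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf with sed over SSH. The sketch below shows an equivalent in-place rewrite in Go; it is illustrative only and not the code minikube runs (minikube issues the sed commands shown in the log).

package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf performs the same two substitutions as the logged sed commands:
// pin the pause image and set cgroup_manager to cgroupfs in CRI-O's drop-in config.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		panic(err)
	}
}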
	I0130 20:38:44.173930   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:44.197493   45037 api_server.go:72] duration metric: took 2.023971316s to wait for apiserver process to appear ...
	I0130 20:38:44.197522   45037 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:38:44.197545   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:44.198089   45037 api_server.go:269] stopped: https://192.168.61.63:8443/healthz: Get "https://192.168.61.63:8443/healthz": dial tcp 192.168.61.63:8443: connect: connection refused
	I0130 20:38:44.697622   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:48.683401   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.683435   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:48.683452   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:46.159722   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Start
	I0130 20:38:46.159892   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring networks are active...
	I0130 20:38:46.160650   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring network default is active
	I0130 20:38:46.160960   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring network mk-old-k8s-version-150971 is active
	I0130 20:38:46.161374   45819 main.go:141] libmachine: (old-k8s-version-150971) Getting domain xml...
	I0130 20:38:46.162142   45819 main.go:141] libmachine: (old-k8s-version-150971) Creating domain...
	I0130 20:38:47.490526   45819 main.go:141] libmachine: (old-k8s-version-150971) Waiting to get IP...
	I0130 20:38:47.491491   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:47.491971   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:47.492059   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:47.491949   46425 retry.go:31] will retry after 201.906522ms: waiting for machine to come up
	I0130 20:38:47.695709   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:47.696195   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:47.696226   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:47.696146   46425 retry.go:31] will retry after 347.547284ms: waiting for machine to come up
	I0130 20:38:48.045541   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.046078   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.046102   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.046013   46425 retry.go:31] will retry after 373.23424ms: waiting for machine to come up
	I0130 20:38:48.420618   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.421238   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.421263   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.421188   46425 retry.go:31] will retry after 515.166265ms: waiting for machine to come up
	I0130 20:38:48.937713   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.942554   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.942581   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.942448   46425 retry.go:31] will retry after 626.563548ms: waiting for machine to come up
	I0130 20:38:49.570078   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:49.570658   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:49.570689   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:49.570550   46425 retry.go:31] will retry after 618.022034ms: waiting for machine to come up
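Note on the "waiting for machine to come up" retries above: after re-creating the libvirt domain, the kvm2 driver polls for a DHCP lease and backs off between attempts (201ms, 347ms, 373ms, ...). Below is a generic Go retry sketch of that pattern under the same assumptions; the lookup function is a stand-in for the real libvirt lease query, not the driver's code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// roughly mirroring the retry.go "will retry after ..." lines in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
		if delay < 2*time.Second {
			delay *= 2 // back off, as the increasing intervals in the log suggest
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		return "", errors.New("no lease yet") // stand-in for a real DHCP lease query
	}, 3*time.Second)
	fmt.Println(ip, err)
}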
	I0130 20:38:48.786797   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.786825   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:48.786848   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:48.837579   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.837608   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:49.198568   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:49.206091   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:38:49.206135   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:38:49.697669   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:49.707878   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:38:49.707912   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:38:50.198039   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:50.209003   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 200:
	ok
	I0130 20:38:50.228887   45037 api_server.go:141] control plane version: v1.28.4
	I0130 20:38:50.228967   45037 api_server.go:131] duration metric: took 6.031436808s to wait for apiserver health ...
	I0130 20:38:50.228981   45037 cni.go:84] Creating CNI manager for ""
	I0130 20:38:50.228991   45037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:50.230543   45037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
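Note on the healthz probes above: api_server.go polls https://192.168.61.63:8443/healthz and treats "connection refused", 403 from system:anonymous, and 500 "healthz check failed" responses as not-ready until a 200 arrives. The Go sketch below reproduces that polling loop in simplified form; it is not minikube's implementation, and it skips TLS verification because this anonymous probe presents no client certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz until it returns 200 OK or the
// timeout expires; any error or non-200 status is treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous probe, no client certs
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.63:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
}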
	I0130 20:38:47.649943   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:47.653185   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:47.653623   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:47.653664   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:47.653933   45441 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0130 20:38:47.659385   45441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:47.675851   45441 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:38:47.675918   45441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:47.724799   45441 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:38:47.724883   45441 ssh_runner.go:195] Run: which lz4
	I0130 20:38:47.729563   45441 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:38:47.735015   45441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:38:47.735048   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:38:49.612191   45441 crio.go:444] Took 1.882668 seconds to copy over tarball
	I0130 20:38:49.612263   45441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:38:50.231895   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:38:50.262363   45037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:38:50.290525   45037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:38:50.307654   45037 system_pods.go:59] 8 kube-system pods found
	I0130 20:38:50.307696   45037 system_pods.go:61] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:38:50.307708   45037 system_pods.go:61] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:38:50.307721   45037 system_pods.go:61] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:38:50.307736   45037 system_pods.go:61] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:38:50.307751   45037 system_pods.go:61] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:38:50.307760   45037 system_pods.go:61] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:38:50.307769   45037 system_pods.go:61] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:38:50.307788   45037 system_pods.go:61] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:38:50.307810   45037 system_pods.go:74] duration metric: took 17.261001ms to wait for pod list to return data ...
	I0130 20:38:50.307820   45037 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:38:50.317889   45037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:38:50.317926   45037 node_conditions.go:123] node cpu capacity is 2
	I0130 20:38:50.317939   45037 node_conditions.go:105] duration metric: took 10.11037ms to run NodePressure ...
	I0130 20:38:50.317960   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:50.681835   45037 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:38:50.688460   45037 kubeadm.go:787] kubelet initialised
	I0130 20:38:50.688488   45037 kubeadm.go:788] duration metric: took 6.61921ms waiting for restarted kubelet to initialise ...
	I0130 20:38:50.688498   45037 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:50.696051   45037 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.703680   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.703713   45037 pod_ready.go:81] duration metric: took 7.634057ms waiting for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.703724   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.703739   45037 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.710192   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "etcd-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.710216   45037 pod_ready.go:81] duration metric: took 6.467699ms waiting for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.710227   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "etcd-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.710235   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.720866   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.720894   45037 pod_ready.go:81] duration metric: took 10.648867ms waiting for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.720906   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.720914   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.731095   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.731162   45037 pod_ready.go:81] duration metric: took 10.237453ms waiting for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.731181   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.731190   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.097357   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-proxy-g7q5t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.097391   45037 pod_ready.go:81] duration metric: took 366.190232ms waiting for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.097404   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-proxy-g7q5t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.097413   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.499223   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.499261   45037 pod_ready.go:81] duration metric: took 401.839475ms waiting for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.499293   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.499303   45037 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.895725   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.895779   45037 pod_ready.go:81] duration metric: took 396.460908ms waiting for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.895798   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.895811   45037 pod_ready.go:38] duration metric: took 1.207302604s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:51.895836   45037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:38:51.909431   45037 ops.go:34] apiserver oom_adj: -16
	I0130 20:38:51.909454   45037 kubeadm.go:640] restartCluster took 21.337960534s
	I0130 20:38:51.909472   45037 kubeadm.go:406] StartCluster complete in 21.386877314s
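Note on the pod_ready lines above: after the restart, minikube waits up to 4m0s for each system-critical pod to report the Ready condition, skipping pods whose node is still NotReady. The client-go sketch below checks a pod's Ready condition with the same kind of poll loop; the kubeconfig path is a placeholder and the pod name is copied from the log for illustration only.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is
// roughly the condition pod_ready.go waits for in the log above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the log uses the Jenkins integration copy.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Mirror the "waiting up to 4m0s" behaviour with a simple poll loop.
	for deadline := time.Now().Add(4 * time.Minute); time.Now().Before(deadline); {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-jqzzv", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}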
	I0130 20:38:51.909491   45037 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:51.909571   45037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:38:51.911558   45037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:51.911793   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:38:51.911888   45037 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:38:51.911974   45037 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-208583"
	I0130 20:38:51.911995   45037 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-208583"
	W0130 20:38:51.912007   45037 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:38:51.912044   45037 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:51.912101   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.912138   45037 addons.go:69] Setting default-storageclass=true in profile "embed-certs-208583"
	I0130 20:38:51.912168   45037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-208583"
	I0130 20:38:51.912131   45037 addons.go:69] Setting metrics-server=true in profile "embed-certs-208583"
	I0130 20:38:51.912238   45037 addons.go:234] Setting addon metrics-server=true in "embed-certs-208583"
	W0130 20:38:51.912250   45037 addons.go:243] addon metrics-server should already be in state true
	I0130 20:38:51.912328   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.912537   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912561   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.912583   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912603   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.912686   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912711   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.923647   45037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-208583" context rescaled to 1 replicas
	I0130 20:38:51.923691   45037 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:38:51.926120   45037 out.go:177] * Verifying Kubernetes components...
	I0130 20:38:51.929413   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:38:51.930498   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0130 20:38:51.930911   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0130 20:38:51.931075   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.931580   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.931988   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.932001   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.932296   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.932730   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.932756   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.933221   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.933273   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.933917   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.934492   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.934524   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.936079   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42667
	I0130 20:38:51.936488   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.937121   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.937144   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.937525   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.937703   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.941576   45037 addons.go:234] Setting addon default-storageclass=true in "embed-certs-208583"
	W0130 20:38:51.941597   45037 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:38:51.941623   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.942033   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.942072   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.953268   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44577
	I0130 20:38:51.953715   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0130 20:38:51.953863   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.954633   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.954659   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.954742   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.955212   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.955233   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.955318   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.955530   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.955663   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.955853   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.957839   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.958080   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.960896   45037 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:38:51.961493   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
	I0130 20:38:51.962677   45037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:38:51.962838   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:38:51.964463   45037 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:38:51.964487   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:38:51.964518   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.964486   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:38:51.964554   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.963107   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.965261   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.965274   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.965656   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.966482   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.966520   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.968651   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969034   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.969062   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969307   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.969493   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.969580   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969656   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.969809   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:51.970328   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.970372   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.970391   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.970521   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.970706   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.970866   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:51.985009   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0130 20:38:51.985512   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.986096   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.986119   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.986558   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.986778   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.988698   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.991566   45037 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:38:51.991620   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:38:51.991647   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.994416   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.995367   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.995370   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.995439   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.995585   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.995740   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.995885   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:52.125074   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:38:52.140756   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:38:52.140787   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:38:52.180728   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:38:52.195559   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:38:52.195587   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:38:52.235770   45037 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 20:38:52.235779   45037 node_ready.go:35] waiting up to 6m0s for node "embed-certs-208583" to be "Ready" ...
	I0130 20:38:52.243414   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:38:52.243444   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:38:52.349604   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:38:54.111857   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.931041237s)
	I0130 20:38:54.111916   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.111938   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112013   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.986903299s)
	I0130 20:38:54.112051   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112065   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112337   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112383   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112398   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112403   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112411   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112421   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112426   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112434   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112423   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112450   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112653   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112728   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112748   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112770   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112797   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112806   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.119872   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.119893   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.120118   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.120138   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121373   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.771724991s)
	I0130 20:38:54.121408   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.121421   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.121619   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.121636   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121647   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.121655   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.121837   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.121853   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121875   45037 addons.go:470] Verifying addon metrics-server=true in "embed-certs-208583"
	I0130 20:38:54.332655   45037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:38:50.189837   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:50.190326   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:50.190352   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:50.190273   46425 retry.go:31] will retry after 843.505616ms: waiting for machine to come up
	I0130 20:38:51.035080   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:51.035482   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:51.035511   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:51.035454   46425 retry.go:31] will retry after 1.230675294s: waiting for machine to come up
	I0130 20:38:52.267754   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:52.268342   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:52.268365   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:52.268298   46425 retry.go:31] will retry after 1.516187998s: waiting for machine to come up
	I0130 20:38:53.785734   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:53.786142   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:53.786173   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:53.786084   46425 retry.go:31] will retry after 2.020274977s: waiting for machine to come up
	I0130 20:38:53.002777   45441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.390479779s)
	I0130 20:38:53.002812   45441 crio.go:451] Took 3.390595 seconds to extract the tarball
	I0130 20:38:53.002824   45441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:38:53.059131   45441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:53.121737   45441 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:38:53.121765   45441 cache_images.go:84] Images are preloaded, skipping loading
	I0130 20:38:53.121837   45441 ssh_runner.go:195] Run: crio config
	I0130 20:38:53.187904   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:38:53.187931   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:53.187953   45441 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:38:53.187982   45441 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-877742 NodeName:default-k8s-diff-port-877742 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:38:53.188157   45441 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-877742"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:38:53.188253   45441 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-877742 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-877742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0130 20:38:53.188320   45441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:38:53.200851   45441 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:38:53.200938   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:38:53.212897   45441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0130 20:38:53.231805   45441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:38:53.253428   45441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0130 20:38:53.274041   45441 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0130 20:38:53.278499   45441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:53.295089   45441 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742 for IP: 192.168.72.52
	I0130 20:38:53.295126   45441 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:53.295326   45441 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:38:53.295393   45441 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:38:53.295497   45441 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.key
	I0130 20:38:53.295581   45441 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.key.02e1fdc8
	I0130 20:38:53.295637   45441 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.key
	I0130 20:38:53.295774   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:38:53.295813   45441 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:38:53.295827   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:38:53.295864   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:38:53.295912   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:38:53.295948   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:38:53.296012   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:53.296828   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:38:53.326150   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:38:53.356286   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:38:53.384496   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:38:53.414403   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:38:53.440628   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:38:53.465452   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:38:53.494321   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:38:53.520528   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:38:53.543933   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:38:53.569293   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:38:53.594995   45441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:38:53.615006   45441 ssh_runner.go:195] Run: openssl version
	I0130 20:38:53.622442   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:38:53.636482   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.642501   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.642572   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.649251   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:38:53.661157   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:38:53.673453   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.678369   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.678439   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.684812   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:38:53.696906   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:38:53.710065   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.714715   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.714776   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.720458   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:38:53.733050   45441 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:38:53.737894   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:38:53.744337   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:38:53.750326   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:38:53.756139   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:38:53.761883   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:38:53.767633   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:38:53.773367   45441 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-877742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-877742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.52 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:38:53.773480   45441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:38:53.773551   45441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:53.815095   45441 cri.go:89] found id: ""
	I0130 20:38:53.815159   45441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:38:53.826497   45441 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:38:53.826521   45441 kubeadm.go:636] restartCluster start
	I0130 20:38:53.826570   45441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:38:53.837155   45441 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:53.838622   45441 kubeconfig.go:92] found "default-k8s-diff-port-877742" server: "https://192.168.72.52:8444"
	I0130 20:38:53.841776   45441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:38:53.852124   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:53.852191   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:53.864432   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.353064   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:54.353141   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:54.365422   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.853083   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:54.853170   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:54.869932   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:55.352281   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:55.352360   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:55.369187   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.550999   45037 addons.go:505] enable addons completed in 2.639107358s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:38:54.692017   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:56.740251   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:55.809310   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:55.809708   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:55.809741   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:55.809655   46425 retry.go:31] will retry after 1.997080797s: waiting for machine to come up
	I0130 20:38:57.808397   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:57.808798   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:57.808829   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:57.808744   46425 retry.go:31] will retry after 3.605884761s: waiting for machine to come up
	I0130 20:38:55.852241   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:55.852356   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:55.864923   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:56.352455   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:56.352559   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:56.368458   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:56.853090   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:56.853175   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:56.869148   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:57.352965   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:57.353055   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:57.370697   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:57.852261   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:57.852391   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:57.868729   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:58.352147   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:58.352250   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:58.368543   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:58.852300   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:58.852373   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:58.868594   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.353039   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:59.353110   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:59.365593   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.852147   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:59.852276   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:59.865561   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:00.353077   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:00.353186   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:00.370006   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.242842   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:59.739830   45037 node_ready.go:49] node "embed-certs-208583" has status "Ready":"True"
	I0130 20:38:59.739851   45037 node_ready.go:38] duration metric: took 7.503983369s waiting for node "embed-certs-208583" to be "Ready" ...
	I0130 20:38:59.739859   45037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:59.746243   45037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.751722   45037 pod_ready.go:92] pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.751745   45037 pod_ready.go:81] duration metric: took 5.480115ms waiting for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.751752   45037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.757152   45037 pod_ready.go:92] pod "etcd-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.757175   45037 pod_ready.go:81] duration metric: took 5.417291ms waiting for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.757184   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.762156   45037 pod_ready.go:92] pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.762231   45037 pod_ready.go:81] duration metric: took 4.985076ms waiting for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.762267   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:01.773853   45037 pod_ready.go:102] pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:01.415831   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:01.416304   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:39:01.416345   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:39:01.416273   46425 retry.go:31] will retry after 3.558433109s: waiting for machine to come up
	I0130 20:39:00.852444   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:00.852545   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:00.865338   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:01.353002   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:01.353101   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:01.366419   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:01.853034   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:01.853114   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:01.866142   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:02.352652   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:02.352752   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:02.364832   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:02.852325   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:02.852406   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:02.864013   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.352408   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:03.352518   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:03.363939   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.853126   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:03.853200   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:03.865047   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.865084   45441 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:39:03.865094   45441 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:39:03.865105   45441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:39:03.865154   45441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:03.904863   45441 cri.go:89] found id: ""
	I0130 20:39:03.904932   45441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:39:03.922225   45441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:39:03.931861   45441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:39:03.931915   45441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:03.941185   45441 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:03.941205   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.064230   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.627940   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.816900   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.893059   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.986288   45441 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:39:04.986362   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.448368   44923 start.go:369] acquired machines lock for "no-preload-473743" in 58.134425603s
	I0130 20:39:06.448435   44923 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:39:06.448443   44923 fix.go:54] fixHost starting: 
	I0130 20:39:06.448866   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:39:06.448900   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:39:06.468570   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I0130 20:39:06.468965   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:39:06.469552   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:39:06.469587   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:39:06.469950   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:39:06.470153   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:06.470312   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:39:06.472312   44923 fix.go:102] recreateIfNeeded on no-preload-473743: state=Stopped err=<nil>
	I0130 20:39:06.472337   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	W0130 20:39:06.472495   44923 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:39:06.474460   44923 out.go:177] * Restarting existing kvm2 VM for "no-preload-473743" ...
	I0130 20:39:04.976314   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.976787   45819 main.go:141] libmachine: (old-k8s-version-150971) Found IP for machine: 192.168.39.16
	I0130 20:39:04.976820   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.976830   45819 main.go:141] libmachine: (old-k8s-version-150971) Reserving static IP address...
	I0130 20:39:04.977271   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "old-k8s-version-150971", mac: "52:54:00:6e:fe:f8", ip: "192.168.39.16"} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:04.977300   45819 main.go:141] libmachine: (old-k8s-version-150971) Reserved static IP address: 192.168.39.16
	I0130 20:39:04.977325   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | skip adding static IP to network mk-old-k8s-version-150971 - found existing host DHCP lease matching {name: "old-k8s-version-150971", mac: "52:54:00:6e:fe:f8", ip: "192.168.39.16"}
	I0130 20:39:04.977345   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Getting to WaitForSSH function...
	I0130 20:39:04.977361   45819 main.go:141] libmachine: (old-k8s-version-150971) Waiting for SSH to be available...
	I0130 20:39:04.979621   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.980015   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:04.980042   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.980138   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Using SSH client type: external
	I0130 20:39:04.980164   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa (-rw-------)
	I0130 20:39:04.980206   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:39:04.980221   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | About to run SSH command:
	I0130 20:39:04.980259   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | exit 0
	I0130 20:39:05.079758   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | SSH cmd err, output: <nil>: 
	I0130 20:39:05.080092   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetConfigRaw
	I0130 20:39:05.080846   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:05.083636   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.084019   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.084062   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.084354   45819 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/config.json ...
	I0130 20:39:05.084608   45819 machine.go:88] provisioning docker machine ...
	I0130 20:39:05.084635   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:05.084845   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.085031   45819 buildroot.go:166] provisioning hostname "old-k8s-version-150971"
	I0130 20:39:05.085063   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.085221   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.087561   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.087930   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.087963   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.088067   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.088220   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.088384   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.088550   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.088736   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.089124   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.089141   45819 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-150971 && echo "old-k8s-version-150971" | sudo tee /etc/hostname
	I0130 20:39:05.232496   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-150971
	
	I0130 20:39:05.232528   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.234898   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.235190   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.235227   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.235310   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.235515   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.235655   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.235791   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.235921   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.236233   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.236251   45819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-150971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-150971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-150971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:39:05.370716   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:39:05.370753   45819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:39:05.370774   45819 buildroot.go:174] setting up certificates
	I0130 20:39:05.370787   45819 provision.go:83] configureAuth start
	I0130 20:39:05.370800   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.371158   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:05.373602   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.373946   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.373970   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.374153   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.376230   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.376617   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.376657   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.376763   45819 provision.go:138] copyHostCerts
	I0130 20:39:05.376816   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:39:05.376826   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:39:05.376892   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:39:05.377066   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:39:05.377079   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:39:05.377113   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:39:05.377205   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:39:05.377216   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:39:05.377243   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:39:05.377336   45819 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-150971 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube old-k8s-version-150971]
	I0130 20:39:05.649128   45819 provision.go:172] copyRemoteCerts
	I0130 20:39:05.649183   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:39:05.649206   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.652019   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.652353   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.652385   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.652657   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.652857   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.653048   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.653207   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:05.753981   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0130 20:39:05.782847   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 20:39:05.810083   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:39:05.836967   45819 provision.go:86] duration metric: configureAuth took 466.16712ms
	I0130 20:39:05.836989   45819 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:39:05.837156   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:39:05.837222   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.840038   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.840384   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.840422   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.840597   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.840832   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.841019   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.841167   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.841338   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.841681   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.841700   45819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:39:06.170121   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:39:06.170151   45819 machine.go:91] provisioned docker machine in 1.08552444s
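The %!s(MISSING) in the sysconfig command above is just the log formatter tripping over a literal %s verb in the command string; going by the echoed output, the command as actually run is presumably (only the printf verb restored here):

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

i.e. it drops the --insecure-registry option into /etc/sysconfig/crio.minikube and restarts CRI-O before the provisioning step is declared done.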
	I0130 20:39:06.170163   45819 start.go:300] post-start starting for "old-k8s-version-150971" (driver="kvm2")
	I0130 20:39:06.170183   45819 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:39:06.170202   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.170544   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:39:06.170583   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.173794   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.174165   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.174192   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.174421   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.174620   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.174804   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.174947   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.273272   45819 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:39:06.277900   45819 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:39:06.277928   45819 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:39:06.278010   45819 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:39:06.278099   45819 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:39:06.278207   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:39:06.286905   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:06.311772   45819 start.go:303] post-start completed in 141.592454ms
	I0130 20:39:06.311808   45819 fix.go:56] fixHost completed within 20.175639407s
	I0130 20:39:06.311832   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.314627   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.314998   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.315027   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.315179   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.315402   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.315585   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.315758   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.315936   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:06.316254   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:06.316269   45819 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:39:06.448193   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647146.389757507
	
	I0130 20:39:06.448219   45819 fix.go:206] guest clock: 1706647146.389757507
	I0130 20:39:06.448230   45819 fix.go:219] Guest: 2024-01-30 20:39:06.389757507 +0000 UTC Remote: 2024-01-30 20:39:06.311812895 +0000 UTC m=+176.717060563 (delta=77.944612ms)
	I0130 20:39:06.448277   45819 fix.go:190] guest clock delta is within tolerance: 77.944612ms
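The date format two lines up is mangled the same way; given the seconds.nanoseconds value it returns, the guest-clock probe is presumably just:

	date +%s.%N

whose result fix.go compares against the host clock to produce the 77.944612ms delta logged above.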
	I0130 20:39:06.448285   45819 start.go:83] releasing machines lock for "old-k8s-version-150971", held for 20.312150878s
	I0130 20:39:06.448318   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.448584   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:06.451978   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.452448   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.452475   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.452632   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453188   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453364   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453450   45819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:39:06.453501   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.453604   45819 ssh_runner.go:195] Run: cat /version.json
	I0130 20:39:06.453622   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.456426   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.456694   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.456722   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.456743   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.457015   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.457218   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.457228   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.457266   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.457473   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.457483   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.457648   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.457657   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.457834   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.457945   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.575025   45819 ssh_runner.go:195] Run: systemctl --version
	I0130 20:39:06.580884   45819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:39:06.730119   45819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:39:06.737872   45819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:39:06.737945   45819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:39:06.752952   45819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
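Restoring the mangled %p verb (and adding the shell quoting that the logged argv form drops), the CNI cleanup above amounts to renaming any bridge/podman configs out of the way so they don't conflict with the config minikube generates, roughly:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;

which is what produces the "disabled [/etc/cni/net.d/87-podman-bridge.conflist]" line above.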
	I0130 20:39:06.752987   45819 start.go:475] detecting cgroup driver to use...
	I0130 20:39:06.753062   45819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:39:06.772925   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:39:06.787880   45819 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:39:06.787957   45819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:39:06.805662   45819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:39:06.820819   45819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:39:06.941809   45819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:39:07.067216   45819 docker.go:233] disabling docker service ...
	I0130 20:39:07.067299   45819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:39:07.084390   45819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:39:07.099373   45819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:39:07.242239   45819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:39:07.378297   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:39:07.390947   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:39:07.414177   45819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0130 20:39:07.414256   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.427074   45819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:39:07.427154   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.439058   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.451547   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.462473   45819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:39:07.474082   45819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:39:07.484883   45819 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:39:07.484943   45819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:39:07.502181   45819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:39:07.511315   45819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:39:07.677114   45819 ssh_runner.go:195] Run: sudo systemctl restart crio
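Pulling the runtime setup in this stretch together (with the mangled printf verbs restored), the commands run over SSH boil down to pointing crictl at the CRI-O socket, pinning the pause image, switching the cgroup manager to cgroupfs, enabling bridge netfilter and IP forwarding, and restarting the daemon:

	sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio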
	I0130 20:39:07.878176   45819 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:39:07.878247   45819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:39:07.885855   45819 start.go:543] Will wait 60s for crictl version
	I0130 20:39:07.885918   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:07.895480   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:39:07.946256   45819 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:39:07.946344   45819 ssh_runner.go:195] Run: crio --version
	I0130 20:39:07.999647   45819 ssh_runner.go:195] Run: crio --version
	I0130 20:39:08.064335   45819 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0130 20:39:04.270868   45037 pod_ready.go:92] pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.270900   45037 pod_ready.go:81] duration metric: took 4.508624463s waiting for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.270911   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.276806   45037 pod_ready.go:92] pod "kube-proxy-g7q5t" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.276830   45037 pod_ready.go:81] duration metric: took 5.914142ms waiting for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.276839   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.283207   45037 pod_ready.go:92] pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.283225   45037 pod_ready.go:81] duration metric: took 6.380407ms waiting for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.283235   45037 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:06.291591   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:08.318273   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:08.065754   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:08.068986   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:08.069433   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:08.069477   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:08.069665   45819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 20:39:08.074101   45819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:08.088404   45819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 20:39:08.088468   45819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:08.133749   45819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 20:39:08.133831   45819 ssh_runner.go:195] Run: which lz4
	I0130 20:39:08.138114   45819 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:39:08.142668   45819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:39:08.142709   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
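The stat format in the existence check above most likely read %s %y (size and mtime) before the log formatter ate it, i.e. something along the lines of:

	stat -c "%s %y" /preloaded.tar.lz4

which, failing with status 1 because the file is absent, is why the 441 MB preload tarball gets copied over on the line above.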
	I0130 20:39:05.487066   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:05.987386   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.486465   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.987491   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:07.486540   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:07.518826   45441 api_server.go:72] duration metric: took 2.532536561s to wait for apiserver process to appear ...
	I0130 20:39:07.518852   45441 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:39:07.518875   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:06.475902   44923 main.go:141] libmachine: (no-preload-473743) Calling .Start
	I0130 20:39:06.476095   44923 main.go:141] libmachine: (no-preload-473743) Ensuring networks are active...
	I0130 20:39:06.476929   44923 main.go:141] libmachine: (no-preload-473743) Ensuring network default is active
	I0130 20:39:06.477344   44923 main.go:141] libmachine: (no-preload-473743) Ensuring network mk-no-preload-473743 is active
	I0130 20:39:06.477817   44923 main.go:141] libmachine: (no-preload-473743) Getting domain xml...
	I0130 20:39:06.478643   44923 main.go:141] libmachine: (no-preload-473743) Creating domain...
	I0130 20:39:07.834909   44923 main.go:141] libmachine: (no-preload-473743) Waiting to get IP...
	I0130 20:39:07.835906   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:07.836320   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:07.836371   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:07.836287   46613 retry.go:31] will retry after 205.559104ms: waiting for machine to come up
	I0130 20:39:08.043926   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.044522   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.044607   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.044570   46613 retry.go:31] will retry after 291.055623ms: waiting for machine to come up
	I0130 20:39:08.337157   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.337756   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.337859   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.337823   46613 retry.go:31] will retry after 462.903788ms: waiting for machine to come up
	I0130 20:39:08.802588   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.803397   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.803497   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.803459   46613 retry.go:31] will retry after 497.808285ms: waiting for machine to come up
	I0130 20:39:09.303349   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:09.304015   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:09.304037   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:09.303936   46613 retry.go:31] will retry after 569.824748ms: waiting for machine to come up
	I0130 20:39:09.875816   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:09.876316   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:09.876348   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:09.876259   46613 retry.go:31] will retry after 589.654517ms: waiting for machine to come up
	I0130 20:39:10.467029   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:10.467568   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:10.467601   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:10.467520   46613 retry.go:31] will retry after 857.069247ms: waiting for machine to come up
	I0130 20:39:10.796542   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:13.290072   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:09.980254   45819 crio.go:444] Took 1.842164 seconds to copy over tarball
	I0130 20:39:09.980328   45819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:39:13.116148   45819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.135783447s)
	I0130 20:39:13.116184   45819 crio.go:451] Took 3.135904 seconds to extract the tarball
	I0130 20:39:13.116196   45819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:39:13.161285   45819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:13.226970   45819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 20:39:13.227008   45819 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 20:39:13.227096   45819 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.227151   45819 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.227153   45819 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.227173   45819 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.227121   45819 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:13.227155   45819 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0130 20:39:13.227439   45819 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.227117   45819 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.229003   45819 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.229038   45819 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:13.229065   45819 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.229112   45819 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.229011   45819 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0130 20:39:13.229170   45819 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.229177   45819 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.229217   45819 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.443441   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.484878   45819 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0130 20:39:13.484941   45819 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.485021   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.489291   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.526847   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.526966   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.527312   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0130 20:39:13.528949   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.532002   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0130 20:39:13.532309   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.532701   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.662312   45819 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0130 20:39:13.662355   45819 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.662422   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.669155   45819 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0130 20:39:13.669201   45819 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.669265   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708339   45819 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0130 20:39:13.708373   45819 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0130 20:39:13.708398   45819 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0130 20:39:13.708404   45819 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.708435   45819 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0130 20:39:13.708470   45819 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.708476   45819 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0130 20:39:13.708491   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.708507   45819 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.708508   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708451   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708443   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708565   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.708549   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.767721   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.767762   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.767789   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0130 20:39:13.767835   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0130 20:39:13.767869   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0130 20:39:13.767935   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.816661   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0130 20:39:13.863740   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0130 20:39:13.863751   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0130 20:39:13.863798   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0130 20:39:14.096216   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:14.241457   45819 cache_images.go:92] LoadImages completed in 1.014424533s
	W0130 20:39:14.241562   45819 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0130 20:39:14.241641   45819 ssh_runner.go:195] Run: crio config
	I0130 20:39:14.307624   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:39:14.307644   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:14.307673   45819 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:39:14.307696   45819 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-150971 NodeName:old-k8s-version-150971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0130 20:39:14.307866   45819 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-150971"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-150971
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.16:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
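	The evictionHard values in the KubeletConfiguration above fall victim to the same %! (MISSING) mangling of a literal percent sign; the intended block is presumably:
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	i.e. disk-pressure eviction is effectively switched off, as the "disable disk resource management" comment in the template notes.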
	
	I0130 20:39:14.307973   45819 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-150971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:39:14.308042   45819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0130 20:39:14.318757   45819 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:39:14.318830   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:39:14.329640   45819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0130 20:39:14.347498   45819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:39:14.365403   45819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0130 20:39:14.383846   45819 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I0130 20:39:14.388138   45819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:14.402420   45819 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971 for IP: 192.168.39.16
	I0130 20:39:14.402483   45819 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:39:14.402661   45819 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:39:14.402707   45819 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:39:14.402780   45819 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.key
	I0130 20:39:14.402837   45819 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.key.5918fcb3
	I0130 20:39:14.402877   45819 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.key
	I0130 20:39:14.403025   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:39:14.403076   45819 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:39:14.403094   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:39:14.403131   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:39:14.403171   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:39:14.403206   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:39:14.403290   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:14.404157   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:39:14.430902   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 20:39:14.454554   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:39:14.482335   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 20:39:14.505963   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:39:14.532616   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:39:14.558930   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:39:14.585784   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:39:14.609214   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:39:14.635743   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
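	(Editor's note: the "found cert" / "ignoring ... impossibly tiny 0 bytes" lines above reflect a simple sanity check on candidate certificate files before they are copied onto the node. A minimal Go sketch of that kind of check, for illustration only and not minikube's certs.go; the path in main is a placeholder:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// usableCert treats a file as a usable PEM certificate only if it is
	// non-empty, decodes as a PEM CERTIFICATE block, and parses as X.509.
	func usableCert(path string) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		if len(data) == 0 {
			return false, nil // the "impossibly tiny 0 bytes" case above
		}
		block, _ := pem.Decode(data)
		if block == nil || block.Type != "CERTIFICATE" {
			return false, nil
		}
		_, err = x509.ParseCertificate(block.Bytes)
		return err == nil, nil
	}

	func main() {
		ok, err := usableCert("/usr/share/ca-certificates/minikubeCA.pem") // illustrative path
		fmt.Println(ok, err)
	}
	)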
	I0130 20:39:12.268901   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:12.268934   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:12.268948   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:12.307051   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:12.307093   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:12.519619   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:12.530857   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:12.530904   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:13.019370   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:13.024544   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:13.024577   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:13.519023   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:13.525748   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:13.525784   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:14.019318   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:14.026570   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:14.026600   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:14.519000   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.074306   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.074341   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:15.074353   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.081035   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.081075   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:11.325993   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:11.326475   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:11.326506   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:11.326439   46613 retry.go:31] will retry after 994.416536ms: waiting for machine to come up
	I0130 20:39:12.323190   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:12.323897   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:12.323924   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:12.323807   46613 retry.go:31] will retry after 1.746704262s: waiting for machine to come up
	I0130 20:39:14.072583   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:14.073100   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:14.073158   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:14.073072   46613 retry.go:31] will retry after 2.322781715s: waiting for machine to come up
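	(Editor's note: the libmachine lines above poll for the VM's IP and log "will retry after ..." with a growing, jittered delay. A rough sketch of that retry pattern, with made-up delays rather than retry.go's actual schedule:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor re-checks cond with a growing, jittered delay, similar in spirit
	// to the "waiting for machine to come up" retries in the log.
	func waitFor(cond func() bool, attempts int) error {
		delay := time.Second
		for i := 0; i < attempts; i++ {
			if cond() {
				return nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
			fmt.Printf("will retry after %s\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2 // back off gradually
		}
		return errors.New("condition never became true")
	}

	func main() {
		tries := 0
		_ = waitFor(func() bool { tries++; return tries > 3 }, 10)
	}
	)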
	I0130 20:39:15.519005   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.609496   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.609529   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:16.018990   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:16.024390   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 200:
	ok
	I0130 20:39:16.037151   45441 api_server.go:141] control plane version: v1.28.4
	I0130 20:39:16.037191   45441 api_server.go:131] duration metric: took 8.518327222s to wait for apiserver health ...
	I0130 20:39:16.037203   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:39:16.037211   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:16.039114   45441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
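	(Editor's note: the long run of healthz checks above shows the apiserver answering 403 while anonymous access is still blocked, then 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200 "ok". A minimal sketch of that polling loop, assuming a self-signed apiserver certificate; this is illustrative, not minikube's api_server.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200, treating 403/500 as
	// "not ready yet" and printing the body, which names the failing hooks.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// certificate verification skipped purely for illustration
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.52:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	)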
	I0130 20:39:15.290788   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:17.292552   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:14.662372   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:39:14.814291   45819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:39:14.832453   45819 ssh_runner.go:195] Run: openssl version
	I0130 20:39:14.838238   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:39:14.848628   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.853713   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.853761   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.859768   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:39:14.870658   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:39:14.881444   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.886241   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.886290   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.892197   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:39:14.903459   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:39:14.914463   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.919337   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.919413   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.925258   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:39:14.935893   45819 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:39:14.941741   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:39:14.948871   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:39:14.955038   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:39:14.961605   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:39:14.967425   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:39:14.973845   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
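	(Editor's note: the openssl runs above use "x509 -noout -checkend 86400", which exits non-zero if the certificate expires within the next 24 hours. A rough stdlib equivalent of that expiry check, for illustration only; the path in main is a placeholder:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(window)), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}
	)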
	I0130 20:39:14.980072   45819 kubeadm.go:404] StartCluster: {Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:39:14.980218   45819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:39:14.980265   45819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:15.021821   45819 cri.go:89] found id: ""
	I0130 20:39:15.021920   45819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:39:15.033604   45819 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:39:15.033629   45819 kubeadm.go:636] restartCluster start
	I0130 20:39:15.033686   45819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:39:15.044324   45819 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:15.045356   45819 kubeconfig.go:92] found "old-k8s-version-150971" server: "https://192.168.39.16:8443"
	I0130 20:39:15.047610   45819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:39:15.057690   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:15.057746   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:15.073207   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:15.558392   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:15.558480   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:15.574711   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.057794   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:16.057971   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:16.073882   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.557808   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:16.557879   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:16.571659   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:17.057817   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:17.057922   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:17.074250   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:17.557727   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:17.557809   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:17.573920   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:18.058504   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:18.058573   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:18.070636   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:18.558163   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:18.558262   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:18.570781   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:19.058321   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:19.058414   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:19.074887   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:19.558503   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:19.558596   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:19.570666   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.040606   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:39:16.065469   45441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:39:16.099751   45441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:39:16.113444   45441 system_pods.go:59] 8 kube-system pods found
	I0130 20:39:16.113486   45441 system_pods.go:61] "coredns-5dd5756b68-2955f" [abae9f5c-ed48-494b-b014-a28f6290d772] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:39:16.113498   45441 system_pods.go:61] "etcd-default-k8s-diff-port-877742" [0f69a8d9-5549-4f3a-8b12-ee9e96e08271] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:39:16.113509   45441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-877742" [ab6cf2c3-cc75-44b8-b039-6e21881a9ade] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:39:16.113519   45441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-877742" [4b313734-cd1e-4229-afcd-4d0b517594ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:39:16.113533   45441 system_pods.go:61] "kube-proxy-s9ssn" [ea1c69e6-d319-41ee-a47f-4937f03ecdc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:39:16.113549   45441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-877742" [3f4d9e5f-1421-4576-839b-3bdfba56700b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:39:16.113566   45441 system_pods.go:61] "metrics-server-57f55c9bc5-hzfwg" [1e06ac92-f7ff-418a-9a8d-72d763566bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:39:16.113582   45441 system_pods.go:61] "storage-provisioner" [4cf793ab-e7a5-4a51-bcb9-a07bea323a44] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:39:16.113599   45441 system_pods.go:74] duration metric: took 13.827445ms to wait for pod list to return data ...
	I0130 20:39:16.113608   45441 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:39:16.121786   45441 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:39:16.121882   45441 node_conditions.go:123] node cpu capacity is 2
	I0130 20:39:16.121904   45441 node_conditions.go:105] duration metric: took 8.289345ms to run NodePressure ...
	I0130 20:39:16.121929   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:16.440112   45441 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:39:16.447160   45441 kubeadm.go:787] kubelet initialised
	I0130 20:39:16.447188   45441 kubeadm.go:788] duration metric: took 7.04624ms waiting for restarted kubelet to initialise ...
	I0130 20:39:16.447198   45441 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:39:16.457164   45441 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-2955f" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:16.463990   45441 pod_ready.go:97] node "default-k8s-diff-port-877742" hosting pod "coredns-5dd5756b68-2955f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.464020   45441 pod_ready.go:81] duration metric: took 6.825543ms waiting for pod "coredns-5dd5756b68-2955f" in "kube-system" namespace to be "Ready" ...
	E0130 20:39:16.464033   45441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-877742" hosting pod "coredns-5dd5756b68-2955f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.464044   45441 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:16.476983   45441 pod_ready.go:97] node "default-k8s-diff-port-877742" hosting pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.477077   45441 pod_ready.go:81] duration metric: took 12.988392ms waiting for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	E0130 20:39:16.477109   45441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-877742" hosting pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.477128   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:18.486083   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
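	(Editor's note: the pod_ready.go messages above ("has status Ready: False", "node ... not Ready (skipping!)") come from inspecting pod and node conditions via the Kubernetes API. A minimal client-go sketch of the pod-side check, assumed for illustration rather than taken from minikube; kubeconfig path and pod name are placeholders:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-2955f", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// A pod counts as ready when its PodReady condition is True.
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				ready = c.Status == corev1.ConditionTrue
			}
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
	}
	)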
	I0130 20:39:16.397588   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:16.398050   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:16.398082   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:16.397988   46613 retry.go:31] will retry after 2.411227582s: waiting for machine to come up
	I0130 20:39:18.810874   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:18.811404   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:18.811439   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:18.811358   46613 retry.go:31] will retry after 2.231016506s: waiting for machine to come up
	I0130 20:39:19.296383   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:21.790307   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:20.058718   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:20.058800   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:20.074443   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:20.558683   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:20.558756   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:20.574765   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.058367   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:21.058456   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:21.074652   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.558528   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:21.558648   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:21.573650   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:22.058161   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:22.058280   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:22.070780   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:22.558448   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:22.558525   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:22.572220   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:23.057797   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:23.057884   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:23.071260   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:23.558193   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:23.558278   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:23.571801   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:24.058483   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:24.058603   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:24.070898   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:24.558465   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:24.558546   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:24.573966   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.008056   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:23.484244   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:23.987592   45441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.987615   45441 pod_ready.go:81] duration metric: took 7.510477497s waiting for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.987624   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.993335   45441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.993358   45441 pod_ready.go:81] duration metric: took 5.726687ms waiting for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.993373   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s9ssn" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.998021   45441 pod_ready.go:92] pod "kube-proxy-s9ssn" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.998045   45441 pod_ready.go:81] duration metric: took 4.664039ms waiting for pod "kube-proxy-s9ssn" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.998057   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:21.044853   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:21.045392   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:21.045423   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:21.045336   46613 retry.go:31] will retry after 3.525646558s: waiting for machine to come up
	I0130 20:39:24.573139   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:24.573573   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:24.573596   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:24.573532   46613 retry.go:31] will retry after 4.365207536s: waiting for machine to come up
	I0130 20:39:23.790893   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:25.791630   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:28.291352   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:25.058653   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:25.058753   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:25.072061   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:25.072091   45819 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:39:25.072115   45819 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:39:25.072127   45819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:39:25.072183   45819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:25.121788   45819 cri.go:89] found id: ""
	I0130 20:39:25.121863   45819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:39:25.137294   45819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:39:25.146157   45819 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:39:25.146213   45819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:25.155323   45819 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:25.155346   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:25.279765   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.617419   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.337617183s)
	I0130 20:39:26.617457   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.825384   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.916818   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:27.026546   45819 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:39:27.026647   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:27.527637   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.026724   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.527352   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.578771   45819 api_server.go:72] duration metric: took 1.552227614s to wait for apiserver process to appear ...
	I0130 20:39:28.578793   45819 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:39:28.578814   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:28.579348   45819 api_server.go:269] stopped: https://192.168.39.16:8443/healthz: Get "https://192.168.39.16:8443/healthz": dial tcp 192.168.39.16:8443: connect: connection refused
	I0130 20:39:29.078918   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:26.006018   45441 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:27.506562   45441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:27.506596   45441 pod_ready.go:81] duration metric: took 3.50852897s waiting for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:27.506609   45441 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:29.514067   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:28.941922   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.942489   44923 main.go:141] libmachine: (no-preload-473743) Found IP for machine: 192.168.50.220
	I0130 20:39:28.942511   44923 main.go:141] libmachine: (no-preload-473743) Reserving static IP address...
	I0130 20:39:28.942528   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has current primary IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.943003   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "no-preload-473743", mac: "52:54:00:c5:07:4a", ip: "192.168.50.220"} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:28.943046   44923 main.go:141] libmachine: (no-preload-473743) DBG | skip adding static IP to network mk-no-preload-473743 - found existing host DHCP lease matching {name: "no-preload-473743", mac: "52:54:00:c5:07:4a", ip: "192.168.50.220"}
	I0130 20:39:28.943063   44923 main.go:141] libmachine: (no-preload-473743) Reserved static IP address: 192.168.50.220
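
The static-IP "reservation" logged above works by matching the existing libvirt DHCP lease for the machine's MAC address. The same lease table can be inspected by hand; this is an illustrative sketch only, with the network and domain names taken from the log lines above:

    virsh net-dhcp-leases mk-no-preload-473743
    virsh domifaddr no-preload-473743 --source lease
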
	I0130 20:39:28.943081   44923 main.go:141] libmachine: (no-preload-473743) DBG | Getting to WaitForSSH function...
	I0130 20:39:28.943092   44923 main.go:141] libmachine: (no-preload-473743) Waiting for SSH to be available...
	I0130 20:39:28.945617   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.945983   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:28.946016   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.946192   44923 main.go:141] libmachine: (no-preload-473743) DBG | Using SSH client type: external
	I0130 20:39:28.946224   44923 main.go:141] libmachine: (no-preload-473743) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa (-rw-------)
	I0130 20:39:28.946257   44923 main.go:141] libmachine: (no-preload-473743) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:39:28.946268   44923 main.go:141] libmachine: (no-preload-473743) DBG | About to run SSH command:
	I0130 20:39:28.946279   44923 main.go:141] libmachine: (no-preload-473743) DBG | exit 0
	I0130 20:39:29.047127   44923 main.go:141] libmachine: (no-preload-473743) DBG | SSH cmd err, output: <nil>: 
	I0130 20:39:29.047505   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetConfigRaw
	I0130 20:39:29.048239   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:29.051059   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.051539   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.051572   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.051875   44923 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/config.json ...
	I0130 20:39:29.052098   44923 machine.go:88] provisioning docker machine ...
	I0130 20:39:29.052122   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:29.052328   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.052480   44923 buildroot.go:166] provisioning hostname "no-preload-473743"
	I0130 20:39:29.052503   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.052693   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.055532   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.055937   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.055968   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.056075   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.056267   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.056428   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.056644   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.056802   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.057242   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.057266   44923 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-473743 && echo "no-preload-473743" | sudo tee /etc/hostname
	I0130 20:39:29.199944   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-473743
	
	I0130 20:39:29.199987   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.202960   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.203402   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.203428   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.203648   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.203840   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.203974   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.204101   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.204253   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.204787   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.204815   44923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-473743' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-473743/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-473743' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:39:29.343058   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:39:29.343090   44923 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:39:29.343118   44923 buildroot.go:174] setting up certificates
	I0130 20:39:29.343131   44923 provision.go:83] configureAuth start
	I0130 20:39:29.343154   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.343457   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:29.346265   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.346671   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.346714   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.346889   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.349402   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.349799   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.349830   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.350015   44923 provision.go:138] copyHostCerts
	I0130 20:39:29.350079   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:39:29.350092   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:39:29.350146   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:39:29.350244   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:39:29.350253   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:39:29.350277   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:39:29.350343   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:39:29.350354   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:39:29.350371   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:39:29.350428   44923 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.no-preload-473743 san=[192.168.50.220 192.168.50.220 localhost 127.0.0.1 minikube no-preload-473743]
	I0130 20:39:29.671070   44923 provision.go:172] copyRemoteCerts
	I0130 20:39:29.671125   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:39:29.671150   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.673890   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.674199   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.674234   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.674386   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.674604   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.674744   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.674901   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:29.769184   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:39:29.797035   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 20:39:29.822932   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:39:29.849781   44923 provision.go:86] duration metric: configureAuth took 506.627652ms
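
configureAuth regenerates the machine's server certificate with the SANs listed a few lines above. minikube does this in Go; a rough openssl equivalent of the same step, for illustration only (file names mirror the paths in the log, SANs copied from the san=[...] entry):

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.no-preload-473743"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:192.168.50.220,DNS:localhost,IP:127.0.0.1,DNS:minikube,DNS:no-preload-473743")
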
	I0130 20:39:29.849818   44923 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:39:29.850040   44923 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 20:39:29.850134   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.852709   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.853108   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.853137   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.853331   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.853574   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.853757   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.853924   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.854108   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.854635   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.854660   44923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:39:30.232249   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:39:30.232288   44923 machine.go:91] provisioned docker machine in 1.180174143s
	I0130 20:39:30.232302   44923 start.go:300] post-start starting for "no-preload-473743" (driver="kvm2")
	I0130 20:39:30.232321   44923 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:39:30.232348   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.232668   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:39:30.232705   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.235383   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.235716   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.235745   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.235860   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.236049   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.236203   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.236346   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.332330   44923 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:39:30.337659   44923 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:39:30.337684   44923 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:39:30.337756   44923 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:39:30.337847   44923 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:39:30.337960   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:39:30.349830   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:30.374759   44923 start.go:303] post-start completed in 142.443985ms
	I0130 20:39:30.374780   44923 fix.go:56] fixHost completed within 23.926338591s
	I0130 20:39:30.374800   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.377807   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.378189   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.378244   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.378414   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.378605   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.378803   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.378954   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.379112   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:30.379649   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:30.379677   44923 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:39:30.512888   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647170.453705676
	
	I0130 20:39:30.512916   44923 fix.go:206] guest clock: 1706647170.453705676
	I0130 20:39:30.512925   44923 fix.go:219] Guest: 2024-01-30 20:39:30.453705676 +0000 UTC Remote: 2024-01-30 20:39:30.374783103 +0000 UTC m=+364.620017880 (delta=78.922573ms)
	I0130 20:39:30.512966   44923 fix.go:190] guest clock delta is within tolerance: 78.922573ms
	I0130 20:39:30.512976   44923 start.go:83] releasing machines lock for "no-preload-473743", held for 24.064563389s
	I0130 20:39:30.513083   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.513387   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:30.516359   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.516699   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.516728   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.516908   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517590   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517747   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517817   44923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:39:30.517864   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.517954   44923 ssh_runner.go:195] Run: cat /version.json
	I0130 20:39:30.517972   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.520814   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521070   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521202   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.521228   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521456   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.521480   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521480   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.521682   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.521722   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.521844   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.521845   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.521997   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.522149   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.522424   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.632970   44923 ssh_runner.go:195] Run: systemctl --version
	I0130 20:39:30.638936   44923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:39:30.784288   44923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:39:30.792079   44923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:39:30.792150   44923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:39:30.809394   44923 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:39:30.809421   44923 start.go:475] detecting cgroup driver to use...
	I0130 20:39:30.809496   44923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:39:30.824383   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:39:30.838710   44923 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:39:30.838765   44923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:39:30.852928   44923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:39:30.867162   44923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:39:30.995737   44923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:39:31.113661   44923 docker.go:233] disabling docker service ...
	I0130 20:39:31.113726   44923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:39:31.127737   44923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:39:31.139320   44923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:39:31.240000   44923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:39:31.340063   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:39:31.353303   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:39:31.371834   44923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:39:31.371889   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.382579   44923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:39:31.382639   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.392544   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.403023   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.413288   44923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
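
The sed invocations above edit the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" next to it. A quick way to confirm the result on the guest (expected values follow directly from the commands in the log):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
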
	I0130 20:39:31.423806   44923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:39:31.433817   44923 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:39:31.433884   44923 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:39:31.447456   44923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:39:31.457035   44923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:39:31.562847   44923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:39:31.752772   44923 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:39:31.752844   44923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:39:31.757880   44923 start.go:543] Will wait 60s for crictl version
	I0130 20:39:31.757943   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:31.761967   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:39:31.800658   44923 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:39:31.800725   44923 ssh_runner.go:195] Run: crio --version
	I0130 20:39:31.852386   44923 ssh_runner.go:195] Run: crio --version
	I0130 20:39:31.910758   44923 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
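
With /etc/crictl.yaml pointing at the CRI-O socket (written a few lines above), crictl needs no explicit endpoint flag; a quick sanity check on the guest would look like this (illustrative only):

    cat /etc/crictl.yaml       # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version
    sudo crictl info | head
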
	I0130 20:39:30.791795   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:33.292307   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:34.079616   45819 api_server.go:269] stopped: https://192.168.39.16:8443/healthz: Get "https://192.168.39.16:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 20:39:34.079674   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:31.516571   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:33.517547   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:31.912241   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:31.915377   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:31.915705   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:31.915735   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:31.915985   44923 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0130 20:39:31.920870   44923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:31.934252   44923 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 20:39:31.934304   44923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:31.975687   44923 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0130 20:39:31.975714   44923 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 20:39:31.975762   44923 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:31.975874   44923 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:31.975900   44923 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:31.975936   44923 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0130 20:39:31.975959   44923 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:31.975903   44923 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:31.976051   44923 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:31.976063   44923 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:31.977466   44923 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:31.977485   44923 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:31.977525   44923 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0130 20:39:31.977531   44923 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:31.977569   44923 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:31.977559   44923 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:31.977663   44923 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:31.977812   44923 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:32.130396   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0130 20:39:32.132105   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.135297   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.135817   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.136698   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.154928   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.172264   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.355420   44923 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0130 20:39:32.355504   44923 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.355537   44923 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0130 20:39:32.355580   44923 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.355454   44923 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0130 20:39:32.355636   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355675   44923 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.355606   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355724   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355763   44923 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0130 20:39:32.355803   44923 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.355844   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355855   44923 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0130 20:39:32.355884   44923 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.355805   44923 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0130 20:39:32.355928   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355929   44923 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.355974   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.360081   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.370164   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.370202   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.370243   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.370259   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.370374   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.466609   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.466714   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.503174   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:32.503293   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:32.507888   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0130 20:39:32.507963   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0130 20:39:32.508061   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:32.508061   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:32.518772   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:32.518883   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0130 20:39:32.518906   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.518932   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:32.518951   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.518824   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0130 20:39:32.518996   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0130 20:39:32.519041   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:32.521450   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0130 20:39:32.521493   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0130 20:39:32.848844   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.579929   44923 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.060972543s)
	I0130 20:39:34.579971   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0130 20:39:34.580001   44923 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.060936502s)
	I0130 20:39:34.580034   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0130 20:39:34.580045   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.061073363s)
	I0130 20:39:34.580059   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0130 20:39:34.580082   44923 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.731208309s)
	I0130 20:39:34.580132   44923 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0130 20:39:34.580088   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:34.580225   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:34.580173   44923 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.580343   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:34.585311   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
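
Because this is the no-preload profile, there is no preloaded image tarball: each control-plane image is copied from the host cache and loaded into CRI-O through podman, as the Run lines above show. The manual equivalent of one such load step, with the path taken straight from the log (illustrative only):

    sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
    sudo crictl images | grep kube-scheduler
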
	I0130 20:39:34.796586   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:34.796615   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:34.796633   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:34.846035   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:34.846071   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:35.079544   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:35.091673   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 20:39:35.091710   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 20:39:35.579233   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:35.587045   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 20:39:35.587071   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 20:39:36.079775   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:36.086927   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0130 20:39:36.095953   45819 api_server.go:141] control plane version: v1.16.0
	I0130 20:39:36.095976   45819 api_server.go:131] duration metric: took 7.517178171s to wait for apiserver health ...
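
The healthz progression above is the normal startup sequence for a restarted apiserver: anonymous requests get 403 until the RBAC bootstrap roles that allow unauthenticated access to /healthz exist, the endpoint then returns 500 while post-start hooks such as rbac/bootstrap-roles and ca-registration are still pending (as the verbose output shows), and finally 200. The same probe can be run by hand against the address in the log (illustrative; -k skips verification of the cluster CA):

    curl -ks https://192.168.39.16:8443/healthz
    curl -ks "https://192.168.39.16:8443/healthz?verbose"
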
	I0130 20:39:36.095985   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:39:36.095992   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:36.097742   45819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:39:35.792385   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:37.792648   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:36.099012   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:39:36.108427   45819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
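
The 457-byte file copied above is minikube's bridge CNI configuration; its exact contents are not reproduced in the log. A generic bridge + portmap conflist of the same shape, written the same way, might look roughly like this (illustrative values only, not the actual file):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
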
	I0130 20:39:36.126083   45819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:39:36.138855   45819 system_pods.go:59] 8 kube-system pods found
	I0130 20:39:36.138882   45819 system_pods.go:61] "coredns-5644d7b6d9-547k4" [6b1119fe-9c8a-44fb-b813-58271228b290] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:39:36.138888   45819 system_pods.go:61] "coredns-5644d7b6d9-dtfzh" [4cbd4f36-bc01-4f55-ba50-b7dcdcb35b9b] Running
	I0130 20:39:36.138894   45819 system_pods.go:61] "etcd-old-k8s-version-150971" [22eeed2c-7454-4b9d-8b4d-1c9a2e5feaf7] Running
	I0130 20:39:36.138899   45819 system_pods.go:61] "kube-apiserver-old-k8s-version-150971" [5ef062e6-2f78-485f-9420-e8714128e39f] Running
	I0130 20:39:36.138903   45819 system_pods.go:61] "kube-controller-manager-old-k8s-version-150971" [4e5df6df-486e-47a8-89b8-8d962212ec3e] Running
	I0130 20:39:36.138907   45819 system_pods.go:61] "kube-proxy-ncl7z" [51b28456-0070-46fc-b647-e28d6bdcfde2] Running
	I0130 20:39:36.138914   45819 system_pods.go:61] "kube-scheduler-old-k8s-version-150971" [384c4dfa-180b-4ec3-9e12-3c6d9e581c0e] Running
	I0130 20:39:36.138918   45819 system_pods.go:61] "storage-provisioner" [8a75a04c-1b80-41f6-9dfd-a7ee6f908b9d] Running
	I0130 20:39:36.138928   45819 system_pods.go:74] duration metric: took 12.820934ms to wait for pod list to return data ...
	I0130 20:39:36.138936   45819 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:39:36.142193   45819 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:39:36.142224   45819 node_conditions.go:123] node cpu capacity is 2
	I0130 20:39:36.142236   45819 node_conditions.go:105] duration metric: took 3.295582ms to run NodePressure ...
	I0130 20:39:36.142256   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:36.480656   45819 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:39:36.486153   45819 retry.go:31] will retry after 323.854639ms: kubelet not initialised
	I0130 20:39:36.816707   45819 retry.go:31] will retry after 303.422684ms: kubelet not initialised
	I0130 20:39:37.125369   45819 retry.go:31] will retry after 697.529029ms: kubelet not initialised
	I0130 20:39:37.829322   45819 retry.go:31] will retry after 626.989047ms: kubelet not initialised
	I0130 20:39:38.463635   45819 retry.go:31] will retry after 1.390069174s: kubelet not initialised
	I0130 20:39:35.519218   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:38.013652   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:40.014621   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:37.168054   44923 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.582708254s)
	I0130 20:39:37.168111   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0130 20:39:37.168188   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.587929389s)
	I0130 20:39:37.168204   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:37.168226   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0130 20:39:37.168257   44923 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:37.168330   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:37.173865   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0130 20:39:39.259662   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.091304493s)
	I0130 20:39:39.259692   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0130 20:39:39.259719   44923 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:39.259777   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:40.291441   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:42.292550   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:39.861179   45819 retry.go:31] will retry after 1.194254513s: kubelet not initialised
	I0130 20:39:41.067315   45819 retry.go:31] will retry after 3.766341089s: kubelet not initialised
	I0130 20:39:42.016919   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:44.514681   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:43.804203   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.54440472s)
	I0130 20:39:43.804228   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0130 20:39:43.804262   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:43.804360   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:44.790577   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:46.791751   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:44.839501   45819 retry.go:31] will retry after 2.957753887s: kubelet not initialised
	I0130 20:39:47.802749   45819 retry.go:31] will retry after 4.750837771s: kubelet not initialised
	I0130 20:39:47.016112   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:49.517716   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:46.385349   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.580960989s)
	I0130 20:39:46.385378   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0130 20:39:46.385403   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:46.385446   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:48.570468   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.184994355s)
	I0130 20:39:48.570504   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0130 20:39:48.570529   44923 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:48.570575   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:49.318398   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0130 20:39:49.318449   44923 cache_images.go:123] Successfully loaded all cached images
	I0130 20:39:49.318457   44923 cache_images.go:92] LoadImages completed in 17.342728639s
	I0130 20:39:49.318542   44923 ssh_runner.go:195] Run: crio config
	I0130 20:39:49.393074   44923 cni.go:84] Creating CNI manager for ""
	I0130 20:39:49.393094   44923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:49.393116   44923 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:39:49.393143   44923 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.220 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-473743 NodeName:no-preload-473743 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:39:49.393301   44923 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-473743"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:39:49.393384   44923 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-473743 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-473743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:39:49.393445   44923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0130 20:39:49.403506   44923 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:39:49.403582   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:39:49.412473   44923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0130 20:39:49.429600   44923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0130 20:39:49.445613   44923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0130 20:39:49.462906   44923 ssh_runner.go:195] Run: grep 192.168.50.220	control-plane.minikube.internal$ /etc/hosts
	I0130 20:39:49.466844   44923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:49.479363   44923 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743 for IP: 192.168.50.220
	I0130 20:39:49.479388   44923 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:39:49.479540   44923 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:39:49.479599   44923 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:39:49.479682   44923 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.key
	I0130 20:39:49.479766   44923 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.key.ef9da43a
	I0130 20:39:49.479832   44923 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.key
	I0130 20:39:49.479984   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:39:49.480020   44923 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:39:49.480031   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:39:49.480052   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:39:49.480082   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:39:49.480104   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:39:49.480148   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:49.480782   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:39:49.504588   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 20:39:49.530340   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:39:49.552867   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:39:49.575974   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:39:49.598538   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:39:49.623489   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:39:49.646965   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:39:49.671998   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:39:49.695493   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:39:49.718975   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:39:49.741793   44923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:39:49.758291   44923 ssh_runner.go:195] Run: openssl version
	I0130 20:39:49.765053   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:39:49.775428   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.780081   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.780130   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.785510   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:39:49.797983   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:39:49.807934   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.812367   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.812416   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.818021   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:39:49.827603   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:39:49.837248   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.841789   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.841838   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.847684   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:39:49.857387   44923 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:39:49.862411   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:39:49.871862   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:39:49.877904   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:39:49.883820   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:39:49.890534   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:39:49.898143   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:39:49.905607   44923 kubeadm.go:404] StartCluster: {Name:no-preload-473743 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-473743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.220 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:39:49.905713   44923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:39:49.905768   44923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:49.956631   44923 cri.go:89] found id: ""
	I0130 20:39:49.956705   44923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:39:49.967500   44923 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:39:49.967516   44923 kubeadm.go:636] restartCluster start
	I0130 20:39:49.967572   44923 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:39:49.977077   44923 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:49.978191   44923 kubeconfig.go:92] found "no-preload-473743" server: "https://192.168.50.220:8443"
	I0130 20:39:49.980732   44923 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:39:49.990334   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:49.990377   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:50.001427   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:50.491017   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:50.491080   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:50.503162   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:48.792438   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:51.290002   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:53.291511   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:52.558586   45819 retry.go:31] will retry after 13.209460747s: kubelet not initialised
	I0130 20:39:52.013659   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:54.013756   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:50.991212   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:50.991312   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:51.004155   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:51.491296   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:51.491369   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:51.502771   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:51.991398   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:51.991529   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:52.004164   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:52.490700   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:52.490817   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:52.504616   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:52.991009   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:52.991101   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:53.004178   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:53.490804   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:53.490897   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:53.502856   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:53.990345   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:53.990451   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:54.003812   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:54.491414   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:54.491522   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:54.502969   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:54.991126   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:54.991212   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:55.003001   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:55.490521   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:55.490609   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:55.501901   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:55.791198   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:58.289750   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:56.513098   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:58.514459   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:55.990820   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:55.990893   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:56.002224   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:56.490338   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:56.490432   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:56.502497   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:56.991097   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:56.991189   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:57.002115   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:57.490604   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:57.490681   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:57.501777   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:57.991320   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:57.991419   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:58.002136   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:58.490641   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:58.490724   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:58.502247   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:58.990830   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:58.990951   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:59.001469   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:59.491109   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:59.491223   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:59.502348   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:59.991097   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:59.991182   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:40:00.002945   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:40:00.002978   44923 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:40:00.002986   44923 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:40:00.002996   44923 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:40:00.003068   44923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:40:00.045168   44923 cri.go:89] found id: ""
	I0130 20:40:00.045245   44923 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:40:00.061704   44923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:40:00.074448   44923 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:40:00.074505   44923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:40:00.083478   44923 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:40:00.083502   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:00.200934   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:00.791680   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:02.791880   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:00.515342   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:02.515914   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:05.014585   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:00.824616   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.029317   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.146596   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.232362   44923 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:40:01.232439   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:01.733118   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:02.232964   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:02.732910   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.232934   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.732852   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.758730   44923 api_server.go:72] duration metric: took 2.526367424s to wait for apiserver process to appear ...
	I0130 20:40:03.758768   44923 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:40:03.758786   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:05.290228   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:07.290842   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:07.869847   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:40:07.869897   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:40:07.869919   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:07.986795   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:07.986841   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:08.259140   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:08.265487   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:08.265523   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:08.759024   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:08.764138   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:08.764163   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:09.259821   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:09.265120   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0130 20:40:09.275933   44923 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 20:40:09.275956   44923 api_server.go:131] duration metric: took 5.517181599s to wait for apiserver health ...
	I0130 20:40:09.275965   44923 cni.go:84] Creating CNI manager for ""
	I0130 20:40:09.275971   44923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:40:09.277688   44923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:40:05.773670   45819 retry.go:31] will retry after 17.341210204s: kubelet not initialised
	I0130 20:40:07.014677   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:09.516836   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:09.279058   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:40:09.307862   44923 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:40:09.339259   44923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:40:09.355136   44923 system_pods.go:59] 8 kube-system pods found
	I0130 20:40:09.355177   44923 system_pods.go:61] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:40:09.355185   44923 system_pods.go:61] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:40:09.355194   44923 system_pods.go:61] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:40:09.355201   44923 system_pods.go:61] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:40:09.355210   44923 system_pods.go:61] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:40:09.355219   44923 system_pods.go:61] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:40:09.355238   44923 system_pods.go:61] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:40:09.355249   44923 system_pods.go:61] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:40:09.355256   44923 system_pods.go:74] duration metric: took 15.951624ms to wait for pod list to return data ...
	I0130 20:40:09.355277   44923 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:40:09.361985   44923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:40:09.362014   44923 node_conditions.go:123] node cpu capacity is 2
	I0130 20:40:09.362025   44923 node_conditions.go:105] duration metric: took 6.74245ms to run NodePressure ...
	I0130 20:40:09.362045   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:09.678111   44923 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:40:09.687808   44923 kubeadm.go:787] kubelet initialised
	I0130 20:40:09.687828   44923 kubeadm.go:788] duration metric: took 9.689086ms waiting for restarted kubelet to initialise ...
	I0130 20:40:09.687835   44923 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:09.694574   44923 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.700190   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "coredns-76f75df574-d4c7t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.700214   44923 pod_ready.go:81] duration metric: took 5.613522ms waiting for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.700230   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "coredns-76f75df574-d4c7t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.700237   44923 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.705513   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "etcd-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.705534   44923 pod_ready.go:81] duration metric: took 5.286859ms waiting for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.705545   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "etcd-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.705553   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.710360   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-apiserver-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.710378   44923 pod_ready.go:81] duration metric: took 4.814631ms waiting for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.710388   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-apiserver-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.710396   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.746412   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.746447   44923 pod_ready.go:81] duration metric: took 36.037006ms waiting for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.746460   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.746469   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.143330   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-proxy-zklzt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.143364   44923 pod_ready.go:81] duration metric: took 396.879081ms waiting for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.143377   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-proxy-zklzt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.143385   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.549132   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-scheduler-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.549171   44923 pod_ready.go:81] duration metric: took 405.77755ms waiting for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.549192   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-scheduler-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.549201   44923 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.942777   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.942802   44923 pod_ready.go:81] duration metric: took 393.589996ms waiting for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.942811   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.942817   44923 pod_ready.go:38] duration metric: took 1.254975084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:10.942834   44923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:40:10.954894   44923 ops.go:34] apiserver oom_adj: -16
	I0130 20:40:10.954916   44923 kubeadm.go:640] restartCluster took 20.987393757s
	I0130 20:40:10.954926   44923 kubeadm.go:406] StartCluster complete in 21.049328159s
	I0130 20:40:10.954944   44923 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:40:10.955025   44923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:40:10.956906   44923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:40:10.957249   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:40:10.957343   44923 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:40:10.957411   44923 addons.go:69] Setting storage-provisioner=true in profile "no-preload-473743"
	I0130 20:40:10.957434   44923 addons.go:234] Setting addon storage-provisioner=true in "no-preload-473743"
	I0130 20:40:10.957440   44923 addons.go:69] Setting metrics-server=true in profile "no-preload-473743"
	I0130 20:40:10.957447   44923 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0130 20:40:10.957451   44923 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:40:10.957471   44923 addons.go:234] Setting addon metrics-server=true in "no-preload-473743"
	W0130 20:40:10.957481   44923 addons.go:243] addon metrics-server should already be in state true
	I0130 20:40:10.957512   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.957522   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.957946   44923 addons.go:69] Setting default-storageclass=true in profile "no-preload-473743"
	I0130 20:40:10.957911   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958230   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.958246   44923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-473743"
	I0130 20:40:10.958477   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958517   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.958600   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958621   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.962458   44923 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-473743" context rescaled to 1 replicas
	I0130 20:40:10.962497   44923 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.220 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:40:10.964710   44923 out.go:177] * Verifying Kubernetes components...
	I0130 20:40:10.966259   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:40:10.975195   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45125
	I0130 20:40:10.975661   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.976231   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.976262   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.976885   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.977509   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.977542   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.978199   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0130 20:40:10.978220   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I0130 20:40:10.979039   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.979106   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.979581   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.979600   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.979584   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.979663   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.979964   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.980074   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.980160   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:10.980655   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.980690   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.984068   44923 addons.go:234] Setting addon default-storageclass=true in "no-preload-473743"
	W0130 20:40:10.984119   44923 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:40:10.984155   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.984564   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.984615   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.997126   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
	I0130 20:40:10.997598   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.997990   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.998006   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.998355   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.998520   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:10.998838   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0130 20:40:10.999186   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.999589   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.999604   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.000003   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.000289   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:11.000433   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.002723   44923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:40:11.001789   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.004317   44923 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:40:11.004329   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:40:11.004345   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.005791   44923 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:40:11.007234   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:40:11.007246   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:40:11.007259   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.006415   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0130 20:40:11.007375   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.007826   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:11.008219   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.008258   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.008405   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.008550   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:11.008566   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.008624   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.008780   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.008900   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.008904   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.009548   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:11.009578   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:11.010414   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.010713   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.010733   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.010938   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.011137   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.011308   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.011424   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.047889   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44097
	I0130 20:40:11.048317   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:11.048800   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:11.048820   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.049210   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.049451   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:11.051439   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.052012   44923 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:40:11.052030   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:40:11.052049   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.055336   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.055865   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.055888   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.055976   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.056175   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.056344   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.056461   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.176670   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:40:11.176694   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:40:11.182136   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:40:11.194238   44923 node_ready.go:35] waiting up to 6m0s for node "no-preload-473743" to be "Ready" ...
	I0130 20:40:11.194301   44923 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 20:40:11.213877   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:40:11.222566   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:40:11.222591   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:40:11.264089   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:40:11.264119   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:40:11.337758   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:40:12.237415   44923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.055244284s)
	I0130 20:40:12.237483   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237482   44923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023570997s)
	I0130 20:40:12.237504   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237521   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237538   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237867   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.237927   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.237949   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237964   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237973   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.237986   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.238018   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.238030   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.238303   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.238319   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.238415   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.238473   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.238485   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.245407   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.245432   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.245653   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.245670   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.287632   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.287660   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.287973   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.287998   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.288000   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.288014   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.288024   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.288266   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.288286   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.288297   44923 addons.go:470] Verifying addon metrics-server=true in "no-preload-473743"
	I0130 20:40:12.288352   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.290298   44923 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:40:09.291762   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:11.791994   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:12.016265   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:14.515097   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:12.291867   44923 addons.go:505] enable addons completed in 1.334521495s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:40:13.200767   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:15.699345   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:14.291583   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:16.292248   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:17.014332   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:19.014556   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:18.198624   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:18.699015   44923 node_ready.go:49] node "no-preload-473743" has status "Ready":"True"
	I0130 20:40:18.699041   44923 node_ready.go:38] duration metric: took 7.504770144s waiting for node "no-preload-473743" to be "Ready" ...
	I0130 20:40:18.699050   44923 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:18.709647   44923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.718022   44923 pod_ready.go:92] pod "coredns-76f75df574-d4c7t" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:18.718046   44923 pod_ready.go:81] duration metric: took 8.370541ms waiting for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.718054   44923 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.722992   44923 pod_ready.go:92] pod "etcd-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:18.723012   44923 pod_ready.go:81] duration metric: took 4.951762ms waiting for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.723020   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:20.732288   44923 pod_ready.go:102] pod "kube-apiserver-no-preload-473743" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:18.791445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:21.290205   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.123817   45819 kubeadm.go:787] kubelet initialised
	I0130 20:40:23.123842   45819 kubeadm.go:788] duration metric: took 46.643164333s waiting for restarted kubelet to initialise ...
	I0130 20:40:23.123849   45819 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:23.128282   45819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.132665   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.132688   45819 pod_ready.go:81] duration metric: took 4.375362ms waiting for pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.132701   45819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.137072   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.137092   45819 pod_ready.go:81] duration metric: took 4.379467ms waiting for pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.137102   45819 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.142038   45819 pod_ready.go:92] pod "etcd-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.142058   45819 pod_ready.go:81] duration metric: took 4.949104ms waiting for pod "etcd-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.142070   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.146657   45819 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.146676   45819 pod_ready.go:81] duration metric: took 4.598238ms waiting for pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.146686   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.518159   45819 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.518189   45819 pod_ready.go:81] duration metric: took 371.488133ms waiting for pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.518203   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ncl7z" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.919594   45819 pod_ready.go:92] pod "kube-proxy-ncl7z" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.919628   45819 pod_ready.go:81] duration metric: took 401.417322ms waiting for pod "kube-proxy-ncl7z" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.919644   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:24.318125   45819 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:24.318152   45819 pod_ready.go:81] duration metric: took 398.499457ms waiting for pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:24.318166   45819 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.513600   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.514060   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:21.233466   44923 pod_ready.go:92] pod "kube-apiserver-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.233494   44923 pod_ready.go:81] duration metric: took 2.510466903s waiting for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.233507   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.240688   44923 pod_ready.go:92] pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.240709   44923 pod_ready.go:81] duration metric: took 7.194165ms waiting for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.240721   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.248250   44923 pod_ready.go:92] pod "kube-proxy-zklzt" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.248271   44923 pod_ready.go:81] duration metric: took 7.542304ms waiting for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.248278   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.256673   44923 pod_ready.go:92] pod "kube-scheduler-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.256700   44923 pod_ready.go:81] duration metric: took 2.008414366s waiting for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.256712   44923 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:25.263480   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.790334   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.290232   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.292270   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.324649   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.825120   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.016305   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.513650   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:27.264434   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:29.764240   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:30.793210   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:33.292255   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:31.326850   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:33.824698   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:30.514448   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:32.518435   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.013676   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:32.264144   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:34.763689   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.789964   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.791095   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.825018   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:38.326094   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.014222   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:39.517868   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.265137   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:39.764115   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:40.290332   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.290850   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:40.327135   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.824370   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.014917   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.516872   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.264387   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.265504   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.291131   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.790512   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.827108   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:47.327816   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.518922   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.014136   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.765151   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.265178   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:48.790952   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.291730   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.824442   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:52.325401   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.014513   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.518388   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.266567   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.764501   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.789915   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:55.790332   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:57.791445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:54.825612   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:57.324364   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:59.327308   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:56.020804   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:58.515544   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:56.263707   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:58.264200   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:00.264261   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:59.792066   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:02.289879   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:01.824631   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:03.824749   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:01.014649   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:03.014805   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:05.017318   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:02.763825   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:04.764040   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:04.290927   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.791853   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.326570   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:08.824889   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:07.516190   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:10.018532   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.765257   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:09.263466   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:09.290744   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:11.791416   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:10.825025   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:13.324947   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:12.514850   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:14.522700   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:11.263911   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:13.763429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:15.766371   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:14.289786   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:16.291753   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:15.325297   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:17.824762   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:17.014087   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:19.518139   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:18.263727   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:20.263854   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:18.791517   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:21.292155   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:19.825751   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:22.324733   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:21.518205   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:24.015562   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:22.767815   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:25.263283   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:23.790847   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.290464   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:24.824063   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.825938   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:29.325683   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.016724   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:28.514670   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:27.264429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:29.264577   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:28.791861   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.291558   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.824367   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.824771   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:30.515432   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.014091   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.265902   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.764211   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.764788   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.791968   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:36.290991   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:38.291383   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.824891   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.825500   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.514120   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.514579   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:39.516165   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.765006   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.263816   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.791224   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.792487   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.326148   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.825282   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.014531   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:44.514337   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.264845   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:44.764275   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:45.290370   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.790557   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:45.325184   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.825091   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:46.515035   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.013829   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.263752   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.263882   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.790715   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:52.291348   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:50.326963   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:52.825278   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:51.014381   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:53.016755   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:51.264167   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:53.264888   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.265000   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:54.291846   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:56.790351   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.325156   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:57.325446   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:59.326114   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.515866   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:58.013768   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:00.014052   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:57.763548   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:59.764374   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:58.790584   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:01.294420   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:01.827046   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.325425   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:02.514100   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.516981   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:02.264420   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.264851   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:03.790918   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.290560   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:08.291334   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.824232   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:08.824527   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:07.014375   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:09.513980   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.764222   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:09.264299   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:10.292477   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:12.795626   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:10.825706   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:13.325572   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:11.514369   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:14.016090   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:11.264881   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:13.763625   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.764616   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.290292   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:17.790263   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.326185   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:17.826504   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:16.518263   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:19.014219   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:18.265723   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:20.764663   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:19.792068   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:22.292221   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:20.325069   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:22.326307   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:21.014811   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:23.014876   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:25.017016   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:23.264098   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:25.267065   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:24.791616   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.291739   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:24.825416   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:26.826380   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.325717   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.513692   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:30.015246   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.763938   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.764135   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.789997   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.790272   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.825466   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:33.826959   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:32.513718   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:35.014948   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.780185   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:34.265062   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:33.790477   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.290139   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.291801   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.325475   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.825210   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:37.513778   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:39.518155   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.764137   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.765005   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:40.790050   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:42.791739   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:41.325239   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:43.826300   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:42.013844   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:44.014396   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:41.268687   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:43.765101   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:45.290120   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:47.291365   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.325321   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.824944   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.015721   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.514689   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.269498   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.763780   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:50.765289   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:49.790212   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:52.291090   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:51.324622   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:53.324873   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:51.015934   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:53.016171   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:52.765777   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.264419   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:54.292666   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:56.790098   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.825230   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.324546   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.514240   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.014796   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:57.764094   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:59.764594   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.790445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.790844   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:03.290632   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.325916   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:02.824174   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.514203   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:02.515317   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:05.018840   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:01.767672   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:04.263736   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:04.290221   45037 pod_ready.go:81] duration metric: took 4m0.006974938s waiting for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	E0130 20:43:04.290244   45037 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:43:04.290252   45037 pod_ready.go:38] duration metric: took 4m4.550384705s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:43:04.290265   45037 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:43:04.290289   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:04.290330   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:04.354567   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:04.354594   45037 cri.go:89] found id: ""
	I0130 20:43:04.354603   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:04.354664   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.359890   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:04.359961   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:04.399415   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:04.399437   45037 cri.go:89] found id: ""
	I0130 20:43:04.399444   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:04.399484   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.404186   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:04.404241   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:04.445968   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:04.445994   45037 cri.go:89] found id: ""
	I0130 20:43:04.446003   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:04.446060   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.450215   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:04.450285   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:04.492438   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:04.492462   45037 cri.go:89] found id: ""
	I0130 20:43:04.492476   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:04.492537   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.497227   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:04.497301   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:04.535936   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:04.535960   45037 cri.go:89] found id: ""
	I0130 20:43:04.535970   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:04.536026   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.540968   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:04.541046   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:04.584192   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:04.584214   45037 cri.go:89] found id: ""
	I0130 20:43:04.584222   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:04.584280   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.588842   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:04.588914   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:04.630957   45037 cri.go:89] found id: ""
	I0130 20:43:04.630984   45037 logs.go:276] 0 containers: []
	W0130 20:43:04.630994   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:04.631000   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:04.631057   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:04.672712   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:04.672741   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:04.672747   45037 cri.go:89] found id: ""
	I0130 20:43:04.672757   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:04.672830   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.677537   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.681806   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:04.681833   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:04.743389   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:04.743420   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:04.783857   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:04.783891   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:04.838800   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:04.838827   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:04.897337   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:04.897361   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:04.954337   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:04.954367   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:05.110447   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:05.110476   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:05.169238   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:05.169275   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:05.209860   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:05.209890   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:05.224272   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:05.224296   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:05.264818   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:05.264857   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:05.304626   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:05.304657   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:05.748336   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:05.748377   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.306639   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:43:08.324001   45037 api_server.go:72] duration metric: took 4m16.400279002s to wait for apiserver process to appear ...
	I0130 20:43:08.324028   45037 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:43:08.324061   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:08.324111   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:08.364000   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.364026   45037 cri.go:89] found id: ""
	I0130 20:43:08.364036   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:08.364093   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.368770   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:08.368843   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:08.411371   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:08.411394   45037 cri.go:89] found id: ""
	I0130 20:43:08.411404   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:08.411462   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.415582   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:08.415648   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:08.455571   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:08.455601   45037 cri.go:89] found id: ""
	I0130 20:43:08.455612   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:08.455662   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.459908   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:08.459972   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:08.497350   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:08.497374   45037 cri.go:89] found id: ""
	I0130 20:43:08.497383   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:08.497441   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.501504   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:08.501552   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:08.550031   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:08.550057   45037 cri.go:89] found id: ""
	I0130 20:43:08.550066   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:08.550181   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.555166   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:08.555215   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:08.590903   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:08.590929   45037 cri.go:89] found id: ""
	I0130 20:43:08.590939   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:08.590997   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.594837   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:08.594888   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:08.630989   45037 cri.go:89] found id: ""
	I0130 20:43:08.631015   45037 logs.go:276] 0 containers: []
	W0130 20:43:08.631024   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:08.631029   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:08.631072   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:08.669579   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:08.669603   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:08.669609   45037 cri.go:89] found id: ""
	I0130 20:43:08.669617   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:08.669666   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.673938   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.677733   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:08.677757   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:08.726492   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:08.726519   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:04.825623   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:07.331997   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:07.514074   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:09.514484   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:06.264040   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:08.264505   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:10.764072   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:08.740624   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:08.740645   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:08.792517   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:08.792547   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:08.829131   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:08.829166   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:08.870777   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:08.870802   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:08.909648   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:08.909678   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.953671   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:08.953701   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:08.989624   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:08.989648   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:09.383141   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:09.383174   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:09.442685   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:09.442719   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:09.563370   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:09.563398   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:09.614390   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:09.614422   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:12.156906   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:43:12.161980   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 200:
	ok
	I0130 20:43:12.163284   45037 api_server.go:141] control plane version: v1.28.4
	I0130 20:43:12.163308   45037 api_server.go:131] duration metric: took 3.839271753s to wait for apiserver health ...
	I0130 20:43:12.163318   45037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:43:12.163343   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:12.163389   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:12.207351   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:12.207372   45037 cri.go:89] found id: ""
	I0130 20:43:12.207381   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:12.207436   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.213923   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:12.213987   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:12.263647   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:12.263680   45037 cri.go:89] found id: ""
	I0130 20:43:12.263690   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:12.263743   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.268327   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:12.268381   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:12.310594   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:12.310614   45037 cri.go:89] found id: ""
	I0130 20:43:12.310622   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:12.310670   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.315134   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:12.315185   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:12.359384   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:12.359404   45037 cri.go:89] found id: ""
	I0130 20:43:12.359412   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:12.359468   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.363796   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:12.363856   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:12.399741   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:12.399771   45037 cri.go:89] found id: ""
	I0130 20:43:12.399783   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:12.399844   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.404237   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:12.404302   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:12.457772   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:12.457806   45037 cri.go:89] found id: ""
	I0130 20:43:12.457816   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:12.457876   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.462316   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:12.462378   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:12.499660   45037 cri.go:89] found id: ""
	I0130 20:43:12.499690   45037 logs.go:276] 0 containers: []
	W0130 20:43:12.499699   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:12.499707   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:12.499763   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:12.548931   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:12.548961   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:12.548969   45037 cri.go:89] found id: ""
	I0130 20:43:12.548978   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:12.549037   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.552983   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.557322   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:12.557340   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:12.599784   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:12.599812   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:12.716124   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:12.716156   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:12.766940   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:12.766980   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:12.804026   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:12.804059   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:13.165109   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:13.165153   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:13.204652   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:13.204679   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:13.242644   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:13.242675   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:13.282527   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:13.282558   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:13.335128   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:13.335156   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:13.385564   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:13.385599   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:13.449564   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:13.449603   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:13.464376   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:13.464406   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:09.825882   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:11.827628   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.325309   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:12.012894   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.014496   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:12.765167   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.765356   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:16.017083   45037 system_pods.go:59] 8 kube-system pods found
	I0130 20:43:16.017121   45037 system_pods.go:61] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running
	I0130 20:43:16.017128   45037 system_pods.go:61] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running
	I0130 20:43:16.017135   45037 system_pods.go:61] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running
	I0130 20:43:16.017141   45037 system_pods.go:61] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running
	I0130 20:43:16.017148   45037 system_pods.go:61] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running
	I0130 20:43:16.017154   45037 system_pods.go:61] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running
	I0130 20:43:16.017165   45037 system_pods.go:61] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:43:16.017172   45037 system_pods.go:61] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running
	I0130 20:43:16.017185   45037 system_pods.go:74] duration metric: took 3.853859786s to wait for pod list to return data ...
	I0130 20:43:16.017198   45037 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:43:16.019949   45037 default_sa.go:45] found service account: "default"
	I0130 20:43:16.019967   45037 default_sa.go:55] duration metric: took 2.760881ms for default service account to be created ...
	I0130 20:43:16.019976   45037 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:43:16.025198   45037 system_pods.go:86] 8 kube-system pods found
	I0130 20:43:16.025219   45037 system_pods.go:89] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running
	I0130 20:43:16.025225   45037 system_pods.go:89] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running
	I0130 20:43:16.025229   45037 system_pods.go:89] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running
	I0130 20:43:16.025234   45037 system_pods.go:89] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running
	I0130 20:43:16.025238   45037 system_pods.go:89] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running
	I0130 20:43:16.025242   45037 system_pods.go:89] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running
	I0130 20:43:16.025248   45037 system_pods.go:89] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:43:16.025258   45037 system_pods.go:89] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running
	I0130 20:43:16.025264   45037 system_pods.go:126] duration metric: took 5.282813ms to wait for k8s-apps to be running ...
	I0130 20:43:16.025270   45037 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:43:16.025309   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:43:16.043415   45037 system_svc.go:56] duration metric: took 18.134458ms WaitForService to wait for kubelet.
	I0130 20:43:16.043443   45037 kubeadm.go:581] duration metric: took 4m24.119724167s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:43:16.043472   45037 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:43:16.046999   45037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:43:16.047021   45037 node_conditions.go:123] node cpu capacity is 2
	I0130 20:43:16.047035   45037 node_conditions.go:105] duration metric: took 3.556321ms to run NodePressure ...
	I0130 20:43:16.047048   45037 start.go:228] waiting for startup goroutines ...
	I0130 20:43:16.047061   45037 start.go:233] waiting for cluster config update ...
	I0130 20:43:16.047078   45037 start.go:242] writing updated cluster config ...
	I0130 20:43:16.047368   45037 ssh_runner.go:195] Run: rm -f paused
	I0130 20:43:16.098760   45037 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:43:16.100739   45037 out.go:177] * Done! kubectl is now configured to use "embed-certs-208583" cluster and "default" namespace by default
	I0130 20:43:16.326450   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:18.824456   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:16.514335   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:19.014528   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:17.264059   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:19.264543   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:20.824649   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.324731   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:21.014634   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.513609   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:21.763771   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.764216   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:25.325575   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:27.825708   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:25.514335   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:27.506991   45441 pod_ready.go:81] duration metric: took 4m0.000368672s waiting for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" ...
	E0130 20:43:27.507020   45441 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 20:43:27.507037   45441 pod_ready.go:38] duration metric: took 4m11.059827725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:43:27.507060   45441 kubeadm.go:640] restartCluster took 4m33.680532974s
	W0130 20:43:27.507128   45441 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 20:43:27.507159   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 20:43:26.264077   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:28.264502   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:30.764952   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:30.325157   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:32.325570   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:32.766530   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:35.264541   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:34.825545   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:36.825757   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:38.825922   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:37.764613   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:39.772391   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:41.253066   45441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.745883202s)
	I0130 20:43:41.253138   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:43:41.267139   45441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:43:41.276814   45441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:43:41.286633   45441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:43:41.286678   45441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 20:43:41.340190   45441 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 20:43:41.340255   45441 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 20:43:41.491373   45441 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 20:43:41.491524   45441 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 20:43:41.491644   45441 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 20:43:41.735659   45441 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:43:41.737663   45441 out.go:204]   - Generating certificates and keys ...
	I0130 20:43:41.737778   45441 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 20:43:41.737875   45441 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 20:43:41.737961   45441 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 20:43:41.738034   45441 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 20:43:41.738116   45441 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 20:43:41.738215   45441 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 20:43:41.738295   45441 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 20:43:41.738381   45441 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 20:43:41.738481   45441 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 20:43:41.738542   45441 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 20:43:41.738578   45441 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 20:43:41.738633   45441 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:43:41.894828   45441 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:43:42.122408   45441 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:43:42.406185   45441 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:43:42.526794   45441 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:43:42.527473   45441 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:43:42.529906   45441 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:43:40.826403   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:43.324650   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:42.531956   45441 out.go:204]   - Booting up control plane ...
	I0130 20:43:42.532077   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:43:42.532175   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:43:42.532276   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:43:42.550440   45441 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:43:42.551432   45441 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:43:42.551515   45441 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 20:43:42.666449   45441 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 20:43:42.265430   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:44.268768   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:45.325121   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:47.325585   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:46.768728   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:49.264313   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:50.670814   45441 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004172 seconds
	I0130 20:43:50.670940   45441 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 20:43:50.693878   45441 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 20:43:51.228257   45441 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 20:43:51.228498   45441 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-877742 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 20:43:51.743336   45441 kubeadm.go:322] [bootstrap-token] Using token: hhyk9t.fiwckj4dbaljm18s
	I0130 20:43:51.744898   45441 out.go:204]   - Configuring RBAC rules ...
	I0130 20:43:51.744996   45441 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 20:43:51.755911   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 20:43:51.769124   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 20:43:51.773192   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 20:43:51.776643   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 20:43:51.780261   45441 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 20:43:51.807541   45441 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 20:43:52.070376   45441 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 20:43:52.167958   45441 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 20:43:52.167994   45441 kubeadm.go:322] 
	I0130 20:43:52.168050   45441 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 20:43:52.168061   45441 kubeadm.go:322] 
	I0130 20:43:52.168142   45441 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 20:43:52.168157   45441 kubeadm.go:322] 
	I0130 20:43:52.168193   45441 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 20:43:52.168254   45441 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 20:43:52.168325   45441 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 20:43:52.168336   45441 kubeadm.go:322] 
	I0130 20:43:52.168399   45441 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 20:43:52.168409   45441 kubeadm.go:322] 
	I0130 20:43:52.168469   45441 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 20:43:52.168480   45441 kubeadm.go:322] 
	I0130 20:43:52.168546   45441 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 20:43:52.168639   45441 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 20:43:52.168731   45441 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 20:43:52.168741   45441 kubeadm.go:322] 
	I0130 20:43:52.168834   45441 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 20:43:52.168928   45441 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 20:43:52.168938   45441 kubeadm.go:322] 
	I0130 20:43:52.169033   45441 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token hhyk9t.fiwckj4dbaljm18s \
	I0130 20:43:52.169145   45441 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 20:43:52.169175   45441 kubeadm.go:322] 	--control-plane 
	I0130 20:43:52.169185   45441 kubeadm.go:322] 
	I0130 20:43:52.169274   45441 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 20:43:52.169283   45441 kubeadm.go:322] 
	I0130 20:43:52.169374   45441 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token hhyk9t.fiwckj4dbaljm18s \
	I0130 20:43:52.169485   45441 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 20:43:52.170103   45441 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 20:43:52.170128   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:43:52.170138   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:43:52.171736   45441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:43:49.827004   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:51.828301   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:54.324951   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:52.173096   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:43:52.207763   45441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
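	Note: the 457-byte /etc/cni/net.d/1-k8s.conflist copied above is minikube's bridge CNI configuration; its exact contents are not reproduced in this log. A minimal conflist of the same general shape (illustrative only; plugin list, pod subnet and other field values are assumptions, not the actual file) looks like:
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }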
	I0130 20:43:52.239391   45441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:43:52.239528   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:52.239550   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=default-k8s-diff-port-877742 minikube.k8s.io/updated_at=2024_01_30T20_43_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:52.359837   45441 ops.go:34] apiserver oom_adj: -16
	I0130 20:43:52.622616   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:53.123165   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:53.622655   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:54.122819   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:54.623579   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:55.122784   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:51.265017   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:53.765449   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:56.826059   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:59.324992   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:55.622980   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.123436   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.623691   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:57.122685   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:57.623150   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:58.123358   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:58.623234   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:59.122804   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:59.623408   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:00.122730   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.264593   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:58.764827   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:00.765740   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:01.325185   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:03.325582   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:00.622649   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:01.123007   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:01.623488   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:02.123117   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:02.623186   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:03.122987   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:03.623625   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:04.123576   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:04.623493   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:05.123073   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:05.292330   45441 kubeadm.go:1088] duration metric: took 13.052870929s to wait for elevateKubeSystemPrivileges.
	I0130 20:44:05.292359   45441 kubeadm.go:406] StartCluster complete in 5m11.519002976s
	I0130 20:44:05.292376   45441 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:05.292446   45441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:44:05.294511   45441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:05.296490   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:44:05.296705   45441 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:44:05.296739   45441 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:44:05.296797   45441 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.296814   45441 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.296823   45441 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:44:05.296872   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.297028   45441 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.297068   45441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-877742"
	I0130 20:44:05.297257   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297282   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.297449   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297476   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.297476   45441 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.297498   45441 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.297512   45441 addons.go:243] addon metrics-server should already be in state true
	I0130 20:44:05.297557   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.297934   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297972   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.314618   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0130 20:44:05.314913   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I0130 20:44:05.315139   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.315638   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.315718   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.315751   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.316139   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.316295   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.316318   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.316342   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0130 20:44:05.316649   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.316695   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.316729   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.316842   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.317131   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.317573   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.317598   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.317967   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.318507   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.318539   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.321078   45441 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.321104   45441 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:44:05.321129   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.321503   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.321530   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.338144   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0130 20:44:05.338180   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0130 20:44:05.338717   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.338798   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.339318   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.339325   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.339343   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.339345   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.339804   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.339819   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.339987   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.340017   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.340889   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33925
	I0130 20:44:05.341348   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.341847   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.341870   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.342243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.342328   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.344137   45441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:44:05.342641   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.344745   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.345833   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:44:05.345871   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:44:05.345889   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.345936   45441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:44:05.347567   45441 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:05.347585   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:44:05.347602   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.346048   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.348959   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.349635   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.349686   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.349853   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.350119   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.350404   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.350619   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:05.351435   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.351548   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.351565   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.351753   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.351924   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.352094   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.352237   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:05.366786   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I0130 20:44:05.367211   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.367744   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.367768   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.368174   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.368435   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.370411   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.370688   45441 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:05.370707   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:44:05.370726   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.375681   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.375726   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.375758   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.375778   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.375938   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.376136   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.376324   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:03.263112   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:05.264610   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:05.536173   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:44:05.547763   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:44:05.547783   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:44:05.561439   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:05.589801   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:05.619036   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:44:05.619063   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:44:05.672972   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:05.672993   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:44:05.753214   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:05.861799   45441 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-877742" context rescaled to 1 replicas
	I0130 20:44:05.861852   45441 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.52 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:44:05.863602   45441 out.go:177] * Verifying Kubernetes components...
	I0130 20:44:05.864716   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:07.418910   45441 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.882691784s)
	I0130 20:44:07.418945   45441 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
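	Note: reconstructed from the sed pipeline that just completed, the block spliced into the coredns ConfigMap's Corefile ahead of the existing "forward . /etc/resolv.conf" directive is:
	        hosts {
	           192.168.72.1 host.minikube.internal
	           fallthrough
	        }
	The second -e expression also inserts a "log" directive before the "errors" line; this is what makes host.minikube.internal resolvable from inside the cluster.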
	I0130 20:44:07.960063   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.370223433s)
	I0130 20:44:07.960120   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960161   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.960158   45441 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.095417539s)
	I0130 20:44:07.960143   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.206889959s)
	I0130 20:44:07.960223   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398756648s)
	I0130 20:44:07.960234   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960247   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.960190   45441 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-877742" to be "Ready" ...
	I0130 20:44:07.960251   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961892   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961892   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961902   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961919   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961921   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961902   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961934   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961936   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961941   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961944   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961950   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961955   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961970   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961980   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961990   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.962309   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962340   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962348   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962350   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.962357   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.962380   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962380   45441 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-877742"
	I0130 20:44:07.962420   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962439   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.979672   45441 node_ready.go:49] node "default-k8s-diff-port-877742" has status "Ready":"True"
	I0130 20:44:07.979700   45441 node_ready.go:38] duration metric: took 19.437813ms waiting for node "default-k8s-diff-port-877742" to be "Ready" ...
	I0130 20:44:07.979713   45441 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
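	Note: the node and system-pod readiness polling recorded here is performed by minikube through the Kubernetes API. An equivalent manual check from the host (illustrative; assumes the default-k8s-diff-port-877742 context that minikube writes to the user's kubeconfig) would be:
	    kubectl --context default-k8s-diff-port-877742 wait --for=condition=Ready node/default-k8s-diff-port-877742 --timeout=6m0s
	    kubectl --context default-k8s-diff-port-877742 -n kube-system get pods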
	I0130 20:44:08.005989   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:08.006020   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:08.006266   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:08.006287   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:08.006286   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:08.008091   45441 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0130 20:44:05.329467   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:07.826212   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:08.009918   45441 addons.go:505] enable addons completed in 2.713172208s: enabled=[metrics-server storage-provisioner default-storageclass]
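	Note: with the metrics-server addon enabled, a quick manual verification (illustrative; v1beta1.metrics.k8s.io is the APIService that metrics-server normally registers) is:
	    kubectl --context default-k8s-diff-port-877742 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context default-k8s-diff-port-877742 top nodes
	The "Ready":"False" polling for metrics-server pods elsewhere in this log is this same condition observed from inside minikube's own wait loop.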
	I0130 20:44:08.032478   45441 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.539497   45441 pod_ready.go:92] pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.539527   45441 pod_ready.go:81] duration metric: took 1.50701275s waiting for pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.539537   45441 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.545068   45441 pod_ready.go:92] pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.545090   45441 pod_ready.go:81] duration metric: took 5.546681ms waiting for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.545099   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.550794   45441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.550817   45441 pod_ready.go:81] duration metric: took 5.711144ms waiting for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.550829   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.556050   45441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.556068   45441 pod_ready.go:81] duration metric: took 5.232882ms waiting for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.556076   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-59zvd" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.562849   45441 pod_ready.go:92] pod "kube-proxy-59zvd" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.562866   45441 pod_ready.go:81] duration metric: took 6.784197ms waiting for pod "kube-proxy-59zvd" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.562874   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.965815   45441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.965846   45441 pod_ready.go:81] duration metric: took 402.96387ms waiting for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.965860   45441 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:07.265985   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:09.765494   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:10.326063   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:12.825921   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:11.974724   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:14.473879   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:12.265674   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:14.765546   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:15.325945   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:17.326041   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:16.974143   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:19.473552   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:16.765691   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:18.766995   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:19.824366   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:21.824919   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:24.318779   45819 pod_ready.go:81] duration metric: took 4m0.000598437s waiting for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" ...
	E0130 20:44:24.318808   45819 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 20:44:24.318829   45819 pod_ready.go:38] duration metric: took 4m1.194970045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:24.318872   45819 kubeadm.go:640] restartCluster took 5m9.285235807s
	W0130 20:44:24.318943   45819 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 20:44:24.318974   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 20:44:21.973193   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.974160   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:21.263429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.263586   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.263609   44923 pod_ready.go:81] duration metric: took 4m0.006890289s waiting for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	E0130 20:44:23.263618   44923 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:44:23.263625   44923 pod_ready.go:38] duration metric: took 4m4.564565945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
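	Note: the 4m0s wait above expires because the metrics-server pod never reports Ready. The equivalent check that fails here, run against the same cluster (illustrative; the pod name is taken from the log lines above), is roughly:
	    kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-57f55c9bc5-wzb2g --timeout=4m0s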
	I0130 20:44:23.263637   44923 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:44:23.263671   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:23.263711   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:23.319983   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:23.320013   44923 cri.go:89] found id: ""
	I0130 20:44:23.320023   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:23.320078   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.325174   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:23.325239   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:23.375914   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:23.375952   44923 cri.go:89] found id: ""
	I0130 20:44:23.375960   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:23.376003   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.380265   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:23.380324   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:23.428507   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:23.428534   44923 cri.go:89] found id: ""
	I0130 20:44:23.428544   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:23.428591   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.434113   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:23.434184   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:23.522888   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:23.522915   44923 cri.go:89] found id: ""
	I0130 20:44:23.522922   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:23.522964   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.534952   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:23.535015   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:23.576102   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:23.576129   44923 cri.go:89] found id: ""
	I0130 20:44:23.576138   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:23.576185   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.580463   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:23.580527   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:23.620990   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:23.621011   44923 cri.go:89] found id: ""
	I0130 20:44:23.621018   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:23.621069   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.625706   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:23.625762   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:23.666341   44923 cri.go:89] found id: ""
	I0130 20:44:23.666368   44923 logs.go:276] 0 containers: []
	W0130 20:44:23.666378   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:23.666384   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:23.666441   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:23.707229   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:23.707248   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:23.707252   44923 cri.go:89] found id: ""
	I0130 20:44:23.707258   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:23.707314   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.711242   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.715859   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:23.715883   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:23.775696   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:23.775722   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:23.817767   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:23.817796   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:24.301934   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:24.301969   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:24.361236   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:24.361265   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:24.511849   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:24.511886   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:24.573648   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:24.573683   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:24.620572   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:24.620608   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:24.687312   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:24.687346   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:24.702224   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:24.702262   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:24.749188   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:24.749218   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:24.793069   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:24.793093   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:24.829705   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:24.829730   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:29.263901   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.944900372s)
	I0130 20:44:29.263978   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:29.277198   45819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:44:29.286661   45819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:44:29.297088   45819 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:44:29.297129   45819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0130 20:44:29.360347   45819 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0130 20:44:29.360446   45819 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 20:44:29.516880   45819 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 20:44:29.517075   45819 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 20:44:29.517217   45819 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 20:44:29.756175   45819 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:44:29.756323   45819 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:44:29.764820   45819 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0130 20:44:29.907654   45819 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:44:26.473595   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:28.473808   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:29.909307   45819 out.go:204]   - Generating certificates and keys ...
	I0130 20:44:29.909397   45819 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 20:44:29.909484   45819 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 20:44:29.909578   45819 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 20:44:29.909674   45819 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 20:44:29.909784   45819 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 20:44:29.909866   45819 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 20:44:29.909974   45819 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 20:44:29.910057   45819 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 20:44:29.910163   45819 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 20:44:29.910266   45819 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 20:44:29.910316   45819 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 20:44:29.910409   45819 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:44:29.974805   45819 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:44:30.281258   45819 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:44:30.605015   45819 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:44:30.782125   45819 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:44:30.783329   45819 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:44:27.369691   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:44:27.393279   44923 api_server.go:72] duration metric: took 4m16.430750077s to wait for apiserver process to appear ...
	I0130 20:44:27.393306   44923 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:44:27.393355   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:27.393434   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:27.443366   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:27.443390   44923 cri.go:89] found id: ""
	I0130 20:44:27.443400   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:27.443457   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.448963   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:27.449021   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:27.502318   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:27.502341   44923 cri.go:89] found id: ""
	I0130 20:44:27.502348   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:27.502398   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.507295   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:27.507352   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:27.548224   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:27.548247   44923 cri.go:89] found id: ""
	I0130 20:44:27.548255   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:27.548299   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.552806   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:27.552864   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:27.608403   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:27.608434   44923 cri.go:89] found id: ""
	I0130 20:44:27.608444   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:27.608523   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.613370   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:27.613435   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:27.668380   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:27.668406   44923 cri.go:89] found id: ""
	I0130 20:44:27.668417   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:27.668470   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.673171   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:27.673231   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:27.720444   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:27.720473   44923 cri.go:89] found id: ""
	I0130 20:44:27.720483   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:27.720546   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.725007   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:27.725062   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:27.772186   44923 cri.go:89] found id: ""
	I0130 20:44:27.772214   44923 logs.go:276] 0 containers: []
	W0130 20:44:27.772224   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:27.772231   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:27.772288   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:27.813222   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:27.813259   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:27.813268   44923 cri.go:89] found id: ""
	I0130 20:44:27.813286   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:27.813347   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.817565   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.821737   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:27.821759   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:28.299900   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:28.299933   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:28.441830   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:28.441866   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:28.485579   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:28.485611   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:28.500668   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:28.500691   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:28.558472   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:28.558502   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:28.604655   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:28.604687   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:28.670010   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:28.670041   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:28.712222   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:28.712259   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:28.764243   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:28.764276   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:28.801930   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:28.801956   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:28.848585   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:28.848612   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:28.902903   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:28.902936   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:30.785050   45819 out.go:204]   - Booting up control plane ...
	I0130 20:44:30.785155   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:44:30.790853   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:44:30.798657   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:44:30.799425   45819 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:44:30.801711   45819 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 20:44:30.475584   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:32.973843   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:34.974144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:31.454103   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:44:31.460009   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0130 20:44:31.461505   44923 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 20:44:31.461527   44923 api_server.go:131] duration metric: took 4.068214052s to wait for apiserver health ...
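The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, repeated until it returns 200 with the body "ok". The same probe can be issued by hand from the host (a sketch; /healthz is normally readable without client credentials, so no certificate flags are passed and TLS verification is skipped with -k):

    curl -k https://192.168.50.220:8443/healthz
    # expected once the control plane is healthy:
    # ok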
	I0130 20:44:31.461537   44923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:44:31.461563   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:31.461626   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:31.509850   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:31.509874   44923 cri.go:89] found id: ""
	I0130 20:44:31.509884   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:31.509941   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.514078   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:31.514136   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:31.555581   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:31.555605   44923 cri.go:89] found id: ""
	I0130 20:44:31.555613   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:31.555674   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.559888   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:31.559948   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:31.620256   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:31.620285   44923 cri.go:89] found id: ""
	I0130 20:44:31.620295   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:31.620352   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.626003   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:31.626064   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:31.662862   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:31.662889   44923 cri.go:89] found id: ""
	I0130 20:44:31.662899   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:31.662972   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.668242   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:31.668306   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:31.717065   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:31.717089   44923 cri.go:89] found id: ""
	I0130 20:44:31.717098   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:31.717160   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.722195   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:31.722250   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:31.779789   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:31.779812   44923 cri.go:89] found id: ""
	I0130 20:44:31.779821   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:31.779894   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.784710   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:31.784776   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:31.826045   44923 cri.go:89] found id: ""
	I0130 20:44:31.826073   44923 logs.go:276] 0 containers: []
	W0130 20:44:31.826082   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:31.826087   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:31.826131   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:31.868212   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:31.868236   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:31.868243   44923 cri.go:89] found id: ""
	I0130 20:44:31.868253   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:31.868314   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.873019   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.877432   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:31.877456   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:31.915888   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:31.915915   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:31.972950   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:31.972978   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:32.028993   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:32.029028   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:32.046602   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:32.046633   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:32.094088   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:32.094123   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:32.138616   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:32.138645   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:32.526995   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:32.527033   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:32.591970   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:32.592003   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:32.655438   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:32.655466   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:32.707131   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:32.707163   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:32.749581   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:32.749610   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:32.815778   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:32.815805   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:35.448121   44923 system_pods.go:59] 8 kube-system pods found
	I0130 20:44:35.448155   44923 system_pods.go:61] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running
	I0130 20:44:35.448162   44923 system_pods.go:61] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running
	I0130 20:44:35.448169   44923 system_pods.go:61] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running
	I0130 20:44:35.448175   44923 system_pods.go:61] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running
	I0130 20:44:35.448181   44923 system_pods.go:61] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running
	I0130 20:44:35.448188   44923 system_pods.go:61] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running
	I0130 20:44:35.448198   44923 system_pods.go:61] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:44:35.448210   44923 system_pods.go:61] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running
	I0130 20:44:35.448221   44923 system_pods.go:74] duration metric: took 3.986678023s to wait for pod list to return data ...
	I0130 20:44:35.448227   44923 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:44:35.451377   44923 default_sa.go:45] found service account: "default"
	I0130 20:44:35.451397   44923 default_sa.go:55] duration metric: took 3.162882ms for default service account to be created ...
	I0130 20:44:35.451404   44923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:44:35.457941   44923 system_pods.go:86] 8 kube-system pods found
	I0130 20:44:35.457962   44923 system_pods.go:89] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running
	I0130 20:44:35.457969   44923 system_pods.go:89] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running
	I0130 20:44:35.457976   44923 system_pods.go:89] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running
	I0130 20:44:35.457983   44923 system_pods.go:89] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running
	I0130 20:44:35.457992   44923 system_pods.go:89] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running
	I0130 20:44:35.457999   44923 system_pods.go:89] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running
	I0130 20:44:35.458013   44923 system_pods.go:89] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:44:35.458023   44923 system_pods.go:89] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running
	I0130 20:44:35.458032   44923 system_pods.go:126] duration metric: took 6.622973ms to wait for k8s-apps to be running ...
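The k8s-apps check that produced the pod list above can be approximated by hand with the kubectl binary minikube ships inside the VM (a sketch only, using the binary and kubeconfig paths that appear elsewhere in this log; it lists the kube-system pods and leaves the "everything Running" judgement to the reader):

    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -o wide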
	I0130 20:44:35.458040   44923 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:44:35.458085   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:35.478158   44923 system_svc.go:56] duration metric: took 20.107762ms WaitForService to wait for kubelet.
	I0130 20:44:35.478182   44923 kubeadm.go:581] duration metric: took 4m24.515659177s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:44:35.478205   44923 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:44:35.481624   44923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:44:35.481649   44923 node_conditions.go:123] node cpu capacity is 2
	I0130 20:44:35.481661   44923 node_conditions.go:105] duration metric: took 3.450762ms to run NodePressure ...
	I0130 20:44:35.481674   44923 start.go:228] waiting for startup goroutines ...
	I0130 20:44:35.481682   44923 start.go:233] waiting for cluster config update ...
	I0130 20:44:35.481695   44923 start.go:242] writing updated cluster config ...
	I0130 20:44:35.481966   44923 ssh_runner.go:195] Run: rm -f paused
	I0130 20:44:35.534192   44923 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0130 20:44:35.537286   44923 out.go:177] * Done! kubectl is now configured to use "no-preload-473743" cluster and "default" namespace by default
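Once "Done!" is printed the profile's kubeconfig context is in place, so a quick sanity check from the host looks like the sketch below (minikube names the context after the profile, which the line above confirms is "no-preload-473743"):

    kubectl --context no-preload-473743 get nodes
    kubectl --context no-preload-473743 -n kube-system get pods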
	I0130 20:44:36.975176   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:39.472594   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:40.808532   45819 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005048 seconds
	I0130 20:44:40.808703   45819 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 20:44:40.821445   45819 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 20:44:41.350196   45819 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 20:44:41.350372   45819 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-150971 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0130 20:44:41.859169   45819 kubeadm.go:322] [bootstrap-token] Using token: vlkrdr.8ubylscclgt88ll2
	I0130 20:44:41.862311   45819 out.go:204]   - Configuring RBAC rules ...
	I0130 20:44:41.862450   45819 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 20:44:41.870072   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 20:44:41.874429   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 20:44:41.883936   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 20:44:41.887738   45819 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 20:44:41.963361   45819 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 20:44:42.299030   45819 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 20:44:42.300623   45819 kubeadm.go:322] 
	I0130 20:44:42.300708   45819 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 20:44:42.300721   45819 kubeadm.go:322] 
	I0130 20:44:42.300820   45819 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 20:44:42.300845   45819 kubeadm.go:322] 
	I0130 20:44:42.300886   45819 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 20:44:42.300975   45819 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 20:44:42.301048   45819 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 20:44:42.301061   45819 kubeadm.go:322] 
	I0130 20:44:42.301126   45819 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 20:44:42.301241   45819 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 20:44:42.301309   45819 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 20:44:42.301326   45819 kubeadm.go:322] 
	I0130 20:44:42.301417   45819 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0130 20:44:42.301482   45819 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 20:44:42.301488   45819 kubeadm.go:322] 
	I0130 20:44:42.301554   45819 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vlkrdr.8ubylscclgt88ll2 \
	I0130 20:44:42.301684   45819 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 20:44:42.301717   45819 kubeadm.go:322]     --control-plane 	  
	I0130 20:44:42.301726   45819 kubeadm.go:322] 
	I0130 20:44:42.301827   45819 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 20:44:42.301844   45819 kubeadm.go:322] 
	I0130 20:44:42.301984   45819 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vlkrdr.8ubylscclgt88ll2 \
	I0130 20:44:42.302116   45819 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 20:44:42.302689   45819 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 20:44:42.302726   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:44:42.302739   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:44:42.305197   45819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:44:42.306389   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:44:42.357619   45819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
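The 457-byte conflist written above is minikube's bridge CNI configuration; its exact contents are not reproduced in the log. Purely as an illustration of the shape of such a file, a typical bridge-plus-portmap conflist looks roughly like the following (every value is a placeholder, not the real contents of /etc/cni/net.d/1-k8s.conflist):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }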
	I0130 20:44:42.381081   45819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:44:42.381189   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:42.381196   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=old-k8s-version-150971 minikube.k8s.io/updated_at=2024_01_30T20_44_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:42.406368   45819 ops.go:34] apiserver oom_adj: -16
	I0130 20:44:42.639356   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:43.139439   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:43.640260   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:44.140080   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:44.639587   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:41.473598   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:43.474059   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:45.140354   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:45.640062   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:46.140282   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:46.639400   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:47.140308   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:47.640045   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:48.139406   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:48.640423   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:49.139702   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:49.640036   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:45.973530   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:47.974364   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:49.974551   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:50.139435   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:50.639471   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:51.140088   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:51.639444   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.139401   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.639731   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:53.140050   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:53.639411   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:54.139942   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:54.640279   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.473624   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:54.474924   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:55.139610   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:55.639431   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:56.140267   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:56.639444   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:57.140068   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:57.296527   45819 kubeadm.go:1088] duration metric: took 14.915402679s to wait for elevateKubeSystemPrivileges.
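The long run of identical "kubectl get sa default" invocations above is a plain poll: minikube keeps asking for the "default" service account until it exists, which is what the elevateKubeSystemPrivileges wait measured here amounts to. The equivalent shell loop, using the exact command from the log (the 0.5s interval is inferred from the roughly half-second spacing of the timestamps):

    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done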
	I0130 20:44:57.296567   45819 kubeadm.go:406] StartCluster complete in 5m42.316503122s
	I0130 20:44:57.296588   45819 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:57.296672   45819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:44:57.298762   45819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:57.299005   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:44:57.299123   45819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:44:57.299208   45819 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299220   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:44:57.299229   45819 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-150971"
	W0130 20:44:57.299241   45819 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:44:57.299220   45819 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299300   45819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-150971"
	I0130 20:44:57.299315   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.299247   45819 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299387   45819 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-150971"
	W0130 20:44:57.299397   45819 addons.go:243] addon metrics-server should already be in state true
	I0130 20:44:57.299433   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.299705   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299726   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.299756   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299760   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299796   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.299897   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.319159   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0130 20:44:57.319202   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0130 20:44:57.319167   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34823
	I0130 20:44:57.319578   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.319707   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.319771   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.320071   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320103   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320242   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320261   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320408   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320423   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320586   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.320630   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.321140   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.321158   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.321591   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.321624   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.321675   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.321705   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.325091   45819 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-150971"
	W0130 20:44:57.325106   45819 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:44:57.325125   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.325420   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.325442   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.342652   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0130 20:44:57.342787   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41961
	I0130 20:44:57.343203   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.343303   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.343745   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.343779   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.343848   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I0130 20:44:57.343887   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.343903   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.344220   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.344244   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.344220   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.344493   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.344494   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.344707   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.344730   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.345083   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.346139   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.346172   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.346830   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.346891   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.348974   45819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:44:57.350330   45819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:44:57.350364   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:44:57.351707   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:44:57.351729   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.351684   45819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:57.351795   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:44:57.351821   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.356145   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356428   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356595   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.356621   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356767   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.357040   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.357095   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.357123   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.357218   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.357266   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.357458   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.357451   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.357617   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.357754   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.362806   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0130 20:44:57.363167   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.363742   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.363770   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.364074   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.364280   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.365877   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.366086   45819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:57.366096   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:44:57.366107   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.369237   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.369890   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.369930   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.369968   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.370351   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.370563   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.370712   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.509329   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:57.535146   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:57.536528   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:44:57.559042   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:44:57.559066   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:44:57.643054   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:44:57.643081   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:44:57.773561   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:57.773588   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:44:57.848668   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:57.910205   45819 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-150971" context rescaled to 1 replicas
	I0130 20:44:57.910247   45819 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:44:57.912390   45819 out.go:177] * Verifying Kubernetes components...
	I0130 20:44:57.913764   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:58.721986   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.186811658s)
	I0130 20:44:58.722033   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722045   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722145   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.185575635s)
	I0130 20:44:58.722210   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212845439s)
	I0130 20:44:58.722213   45819 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
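The sed pipeline that just completed rewrites the Corefile stored in the coredns ConfigMap so that host.minikube.internal resolves to the VM's gateway. Reconstructed directly from the two sed expressions (insert "log" before the "errors" directive, insert a hosts block before the "forward . /etc/resolv.conf" directive), the patched Corefile ends up containing a fragment equivalent to:

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf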
	I0130 20:44:58.722254   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722271   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722347   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.722359   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722371   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.722381   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722391   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722537   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.722576   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722593   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.722611   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722621   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722659   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722675   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.724251   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.724291   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.724304   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.798383   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.798410   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.798745   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.798767   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.798816   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125243   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.276531373s)
	I0130 20:44:59.125305   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:59.125322   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:59.125256   45819 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.211465342s)
	I0130 20:44:59.125360   45819 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-150971" to be "Ready" ...
	I0130 20:44:59.125612   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:59.125639   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125650   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:59.125650   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:59.125659   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:59.125902   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:59.125953   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:59.125963   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125972   45819 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-150971"
	I0130 20:44:59.127634   45819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:44:59.129415   45819 addons.go:505] enable addons completed in 1.830294624s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:44:59.141691   45819 node_ready.go:49] node "old-k8s-version-150971" has status "Ready":"True"
	I0130 20:44:59.141715   45819 node_ready.go:38] duration metric: took 16.331635ms waiting for node "old-k8s-version-150971" to be "Ready" ...
	I0130 20:44:59.141725   45819 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:59.146645   45819 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:56.475086   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:58.973370   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:00.161718   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace has status "Ready":"True"
	I0130 20:45:00.161741   45819 pod_ready.go:81] duration metric: took 1.015069343s waiting for pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.161754   45819 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zbdxm" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.668280   45819 pod_ready.go:92] pod "kube-proxy-zbdxm" in "kube-system" namespace has status "Ready":"True"
	I0130 20:45:00.668313   45819 pod_ready.go:81] duration metric: took 506.550797ms waiting for pod "kube-proxy-zbdxm" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.668328   45819 pod_ready.go:38] duration metric: took 1.526591158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:45:00.668343   45819 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:45:00.668398   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:45:00.682119   45819 api_server.go:72] duration metric: took 2.771845703s to wait for apiserver process to appear ...
	I0130 20:45:00.682143   45819 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:45:00.682167   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:45:00.687603   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0130 20:45:00.688287   45819 api_server.go:141] control plane version: v1.16.0
	I0130 20:45:00.688302   45819 api_server.go:131] duration metric: took 6.153997ms to wait for apiserver health ...
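	(For reference, the healthz wait recorded above boils down to an HTTPS GET against the apiserver that must return 200 with body "ok". A minimal Go sketch of such a probe follows; it is illustrative only, the address is copied from the log, and InsecureSkipVerify is used just to keep the sketch short, whereas the real client authenticates with the cluster's TLS certificates.)

	// healthz_probe.go: illustrative probe of an apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.16:8443/healthz") // address taken from the log above
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
	}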
	I0130 20:45:00.688309   45819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:45:00.691917   45819 system_pods.go:59] 4 kube-system pods found
	I0130 20:45:00.691936   45819 system_pods.go:61] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.691942   45819 system_pods.go:61] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.691948   45819 system_pods.go:61] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.691954   45819 system_pods.go:61] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.691962   45819 system_pods.go:74] duration metric: took 3.648521ms to wait for pod list to return data ...
	I0130 20:45:00.691970   45819 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:45:00.694229   45819 default_sa.go:45] found service account: "default"
	I0130 20:45:00.694250   45819 default_sa.go:55] duration metric: took 2.274248ms for default service account to be created ...
	I0130 20:45:00.694258   45819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:45:00.698156   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:00.698179   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.698187   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.698198   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.698210   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.698234   45819 retry.go:31] will retry after 277.03208ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:00.979637   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:00.979660   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.979665   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.979671   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.979677   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.979694   45819 retry.go:31] will retry after 341.469517ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.326631   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:01.326666   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:01.326674   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:01.326683   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:01.326689   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:01.326713   45819 retry.go:31] will retry after 487.104661ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.818702   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:01.818733   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:01.818742   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:01.818752   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:01.818759   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:01.818779   45819 retry.go:31] will retry after 574.423042ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:02.398901   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:02.398940   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:02.398949   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:02.398959   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:02.398966   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:02.398986   45819 retry.go:31] will retry after 741.538469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:03.145137   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:03.145162   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:03.145168   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:03.145174   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:03.145179   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:03.145194   45819 retry.go:31] will retry after 742.915086ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:03.892722   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:03.892748   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:03.892753   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:03.892759   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:03.892764   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:03.892779   45819 retry.go:31] will retry after 786.727719ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.473056   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:03.473346   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:04.685933   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:04.685967   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:04.685976   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:04.685985   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:04.685993   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:04.686016   45819 retry.go:31] will retry after 1.232157955s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:05.923020   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:05.923045   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:05.923050   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:05.923056   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:05.923061   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:05.923076   45819 retry.go:31] will retry after 1.652424416s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:07.580982   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:07.581007   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:07.581013   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:07.581019   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:07.581026   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:07.581042   45819 retry.go:31] will retry after 1.774276151s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:09.360073   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:09.360098   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:09.360103   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:09.360110   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:09.360115   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:09.360133   45819 retry.go:31] will retry after 2.786181653s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:05.975152   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:07.975274   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:12.151191   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:12.151215   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:12.151221   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:12.151227   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:12.151232   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:12.151258   45819 retry.go:31] will retry after 3.456504284s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:10.472793   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:12.474310   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:14.977715   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:15.613679   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:15.613705   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:15.613711   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:15.613718   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:15.613722   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:15.613741   45819 retry.go:31] will retry after 4.434906632s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:17.472993   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:19.473530   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:20.053023   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:20.053050   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:20.053055   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:20.053062   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:20.053066   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:20.053082   45819 retry.go:31] will retry after 3.910644554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:23.969998   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:23.970027   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:23.970035   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:23.970047   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:23.970053   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:23.970075   45819 retry.go:31] will retry after 4.907431581s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:21.473946   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:23.973965   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:28.881886   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:28.881911   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:28.881917   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:28.881924   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:28.881929   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:28.881956   45819 retry.go:31] will retry after 7.594967181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:26.473519   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:28.474676   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:30.972445   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:32.973156   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:34.973590   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:36.482226   45819 system_pods.go:86] 5 kube-system pods found
	I0130 20:45:36.482255   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:36.482261   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:36.482267   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Pending
	I0130 20:45:36.482277   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:36.482284   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:36.482306   45819 retry.go:31] will retry after 8.875079493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:36.974189   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:39.474803   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:41.973709   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:43.974130   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:45.361733   45819 system_pods.go:86] 5 kube-system pods found
	I0130 20:45:45.361760   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:45.361766   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:45.361772   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:45:45.361781   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:45.361789   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:45.361820   45819 retry.go:31] will retry after 9.918306048s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0130 20:45:45.976853   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:48.476619   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:50.974748   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:52.975900   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:55.285765   45819 system_pods.go:86] 6 kube-system pods found
	I0130 20:45:55.285793   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:55.285801   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Pending
	I0130 20:45:55.285807   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:55.285813   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:45:55.285822   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:55.285828   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:55.285849   45819 retry.go:31] will retry after 12.684125727s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0130 20:45:55.473705   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:57.973533   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:59.974108   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:02.473825   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:04.973953   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:07.975898   45819 system_pods.go:86] 8 kube-system pods found
	I0130 20:46:07.975923   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:46:07.975929   45819 system_pods.go:89] "etcd-old-k8s-version-150971" [21884345-e587-4bae-88b9-78e0bdacf954] Running
	I0130 20:46:07.975933   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Running
	I0130 20:46:07.975937   45819 system_pods.go:89] "kube-controller-manager-old-k8s-version-150971" [f0cfbd77-f00e-4d40-a301-f24f6ed937e1] Pending
	I0130 20:46:07.975941   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:46:07.975944   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:46:07.975951   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:46:07.975955   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:46:07.975969   45819 retry.go:31] will retry after 15.59894457s: missing components: kube-controller-manager
	I0130 20:46:07.472712   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:09.474175   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:11.478228   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:13.973190   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:16.473264   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:18.474418   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:23.581862   45819 system_pods.go:86] 8 kube-system pods found
	I0130 20:46:23.581890   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:46:23.581895   45819 system_pods.go:89] "etcd-old-k8s-version-150971" [21884345-e587-4bae-88b9-78e0bdacf954] Running
	I0130 20:46:23.581899   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Running
	I0130 20:46:23.581904   45819 system_pods.go:89] "kube-controller-manager-old-k8s-version-150971" [f0cfbd77-f00e-4d40-a301-f24f6ed937e1] Running
	I0130 20:46:23.581907   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:46:23.581911   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:46:23.581918   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:46:23.581923   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:46:23.581932   45819 system_pods.go:126] duration metric: took 1m22.887668504s to wait for k8s-apps to be running ...
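	(The retry.go lines above record a poll-with-backoff wait: re-check which kube-system components are running, and on failure sleep an increasing interval before trying again until a deadline. A schematic Go sketch of that pattern is below; the check function, intervals, and timeout are placeholders, not minikube's actual values.)

	// wait_for.go: schematic poll-with-backoff loop, illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		interval := 250 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			fmt.Printf("will retry after %v: %v\n", interval, err)
			time.Sleep(interval)
			if interval < 10*time.Second {
				interval *= 2 // grow the interval, roughly as the log's retry delays do
			}
		}
	}

	func main() {
		_ = waitFor(func() error {
			return errors.New("missing components: kube-controller-manager")
		}, 2*time.Second)
	}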
	I0130 20:46:23.581939   45819 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:46:23.581986   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:46:23.604099   45819 system_svc.go:56] duration metric: took 22.14886ms WaitForService to wait for kubelet.
	I0130 20:46:23.604134   45819 kubeadm.go:581] duration metric: took 1m25.693865663s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:46:23.604159   45819 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:46:23.607539   45819 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:46:23.607567   45819 node_conditions.go:123] node cpu capacity is 2
	I0130 20:46:23.607580   45819 node_conditions.go:105] duration metric: took 3.415829ms to run NodePressure ...
	I0130 20:46:23.607594   45819 start.go:228] waiting for startup goroutines ...
	I0130 20:46:23.607602   45819 start.go:233] waiting for cluster config update ...
	I0130 20:46:23.607615   45819 start.go:242] writing updated cluster config ...
	I0130 20:46:23.607933   45819 ssh_runner.go:195] Run: rm -f paused
	I0130 20:46:23.658357   45819 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0130 20:46:23.660375   45819 out.go:177] 
	W0130 20:46:23.661789   45819 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0130 20:46:23.663112   45819 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0130 20:46:23.664623   45819 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-150971" cluster and "default" namespace by default
	I0130 20:46:20.474791   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:22.973143   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:24.974320   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:27.474508   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:29.973471   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:31.973727   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:33.974180   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:36.472928   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:38.474336   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:40.973509   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:42.973942   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:45.473120   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:47.972943   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:49.973756   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:51.973913   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:54.472597   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:56.473076   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:58.974262   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:01.476906   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:03.974275   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:06.474453   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:08.973144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:10.973407   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:12.974842   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:15.473765   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:17.474938   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:19.973849   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:21.974660   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:23.977144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:26.479595   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:28.975572   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:31.473715   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:33.974243   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:36.472321   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:38.473133   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:40.973786   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:43.473691   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:45.476882   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:47.975923   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:50.474045   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:52.474411   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:54.474531   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:56.973542   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:58.974226   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:00.975045   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:03.473440   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:05.473667   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:07.973417   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:09.978199   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:09.978230   45441 pod_ready.go:81] duration metric: took 4m0.012361166s waiting for pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace to be "Ready" ...
	E0130 20:48:09.978243   45441 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:48:09.978253   45441 pod_ready.go:38] duration metric: took 4m1.998529694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:48:09.978276   45441 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:48:09.978323   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:09.978403   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:10.038921   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:10.038949   45441 cri.go:89] found id: ""
	I0130 20:48:10.038958   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:10.039017   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.043851   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:10.043902   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:10.088920   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:10.088945   45441 cri.go:89] found id: ""
	I0130 20:48:10.088952   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:10.089001   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.094186   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:10.094267   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:10.143350   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:10.143380   45441 cri.go:89] found id: ""
	I0130 20:48:10.143390   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:10.143450   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.148357   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:10.148426   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:10.187812   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:10.187848   45441 cri.go:89] found id: ""
	I0130 20:48:10.187858   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:10.187914   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.192049   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:10.192109   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:10.241052   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:10.241079   45441 cri.go:89] found id: ""
	I0130 20:48:10.241088   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:10.241139   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.245711   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:10.245763   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:10.287115   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:10.287139   45441 cri.go:89] found id: ""
	I0130 20:48:10.287148   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:10.287194   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.291627   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:10.291697   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:10.341321   45441 cri.go:89] found id: ""
	I0130 20:48:10.341346   45441 logs.go:276] 0 containers: []
	W0130 20:48:10.341356   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:10.341362   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:10.341420   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:10.385515   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:10.385543   45441 cri.go:89] found id: ""
	I0130 20:48:10.385552   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:10.385601   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.390397   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:10.390433   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:10.832689   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:10.832724   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:10.846560   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:10.846587   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:10.887801   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:10.887826   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:10.942977   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:10.943003   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:10.987642   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:10.987669   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:11.024934   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:11.024964   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:11.076336   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:11.076373   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:11.127315   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:11.127344   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:11.182944   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:11.182974   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:11.276494   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:11.276525   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:11.413186   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:11.413213   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:13.960537   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:48:13.977332   45441 api_server.go:72] duration metric: took 4m8.11544723s to wait for apiserver process to appear ...
	I0130 20:48:13.977362   45441 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:48:13.977400   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:13.977466   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:14.025510   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:14.025534   45441 cri.go:89] found id: ""
	I0130 20:48:14.025542   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:14.025593   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.030025   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:14.030103   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:14.070504   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:14.070524   45441 cri.go:89] found id: ""
	I0130 20:48:14.070531   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:14.070577   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.074858   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:14.074928   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:14.110816   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:14.110844   45441 cri.go:89] found id: ""
	I0130 20:48:14.110853   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:14.110912   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.114997   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:14.115079   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:14.169213   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:14.169240   45441 cri.go:89] found id: ""
	I0130 20:48:14.169249   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:14.169305   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.173541   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:14.173607   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:14.210634   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:14.210657   45441 cri.go:89] found id: ""
	I0130 20:48:14.210664   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:14.210717   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.215015   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:14.215074   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:14.258454   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:14.258477   45441 cri.go:89] found id: ""
	I0130 20:48:14.258484   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:14.258532   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.262486   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:14.262537   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:14.302175   45441 cri.go:89] found id: ""
	I0130 20:48:14.302205   45441 logs.go:276] 0 containers: []
	W0130 20:48:14.302213   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:14.302218   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:14.302262   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:14.339497   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:14.339523   45441 cri.go:89] found id: ""
	I0130 20:48:14.339533   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:14.339589   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.343954   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:14.343983   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:14.391168   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:14.391203   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:14.436713   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:14.436743   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:14.473899   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:14.473934   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:14.533733   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:14.533763   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:14.924087   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:14.924121   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:14.972652   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:14.972684   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:15.074398   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:15.074443   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:15.206993   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:15.207026   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:15.258807   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:15.258841   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:15.299162   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:15.299209   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:15.315611   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:15.315643   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:17.859914   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:48:17.865483   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 200:
	ok
	I0130 20:48:17.866876   45441 api_server.go:141] control plane version: v1.28.4
	I0130 20:48:17.866899   45441 api_server.go:131] duration metric: took 3.889528289s to wait for apiserver health ...
	I0130 20:48:17.866910   45441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:48:17.866937   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:17.866992   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:17.907357   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:17.907386   45441 cri.go:89] found id: ""
	I0130 20:48:17.907396   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:17.907461   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.911558   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:17.911617   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:17.948725   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:17.948747   45441 cri.go:89] found id: ""
	I0130 20:48:17.948757   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:17.948819   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.953304   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:17.953365   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:17.994059   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:17.994091   45441 cri.go:89] found id: ""
	I0130 20:48:17.994101   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:17.994158   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.998347   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:17.998402   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:18.047814   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:18.047842   45441 cri.go:89] found id: ""
	I0130 20:48:18.047853   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:18.047914   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.052864   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:18.052927   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:18.091597   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:18.091617   45441 cri.go:89] found id: ""
	I0130 20:48:18.091625   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:18.091680   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.095921   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:18.096034   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:18.146922   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:18.146942   45441 cri.go:89] found id: ""
	I0130 20:48:18.146952   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:18.147002   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.156610   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:18.156671   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:18.209680   45441 cri.go:89] found id: ""
	I0130 20:48:18.209701   45441 logs.go:276] 0 containers: []
	W0130 20:48:18.209711   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:18.209716   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:18.209761   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:18.253810   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:18.253834   45441 cri.go:89] found id: ""
	I0130 20:48:18.253841   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:18.253883   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.258404   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:18.258433   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:18.305088   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:18.305117   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:18.629911   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:18.629948   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:18.677758   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:18.677787   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:18.779831   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:18.779869   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:18.795995   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:18.796024   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:18.844003   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:18.844034   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:18.884617   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:18.884645   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:18.931556   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:18.931591   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:19.066569   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:19.066606   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:19.115012   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:19.115041   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:19.169107   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:19.169137   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:21.731792   45441 system_pods.go:59] 8 kube-system pods found
	I0130 20:48:21.731816   45441 system_pods.go:61] "coredns-5dd5756b68-tlb8h" [547c1fe4-3ef7-421a-b460-660a05caa2ab] Running
	I0130 20:48:21.731821   45441 system_pods.go:61] "etcd-default-k8s-diff-port-877742" [a8ff44ad-5fec-415b-a574-75bce55acf8e] Running
	I0130 20:48:21.731826   45441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-877742" [b183118a-5376-412c-a991-eaebf0e6a46e] Running
	I0130 20:48:21.731830   45441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-877742" [cd5170b0-7d1c-45fd-9670-376d04e7016b] Running
	I0130 20:48:21.731834   45441 system_pods.go:61] "kube-proxy-59zvd" [ca6ef754-0898-4e1d-9ff2-9f42f456db6c] Running
	I0130 20:48:21.731838   45441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-877742" [5870d68e-b7af-408b-9484-a7e414bbe7f7] Running
	I0130 20:48:21.731845   45441 system_pods.go:61] "metrics-server-57f55c9bc5-xjc2m" [7b9a273b-d328-4ae8-925e-5bb305cfe574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:48:21.731853   45441 system_pods.go:61] "storage-provisioner" [db1a28e4-0c45-496e-a566-32a402b0841d] Running
	I0130 20:48:21.731862   45441 system_pods.go:74] duration metric: took 3.864945598s to wait for pod list to return data ...
	I0130 20:48:21.731878   45441 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:48:21.734586   45441 default_sa.go:45] found service account: "default"
	I0130 20:48:21.734604   45441 default_sa.go:55] duration metric: took 2.721611ms for default service account to be created ...
	I0130 20:48:21.734611   45441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:48:21.740794   45441 system_pods.go:86] 8 kube-system pods found
	I0130 20:48:21.740817   45441 system_pods.go:89] "coredns-5dd5756b68-tlb8h" [547c1fe4-3ef7-421a-b460-660a05caa2ab] Running
	I0130 20:48:21.740822   45441 system_pods.go:89] "etcd-default-k8s-diff-port-877742" [a8ff44ad-5fec-415b-a574-75bce55acf8e] Running
	I0130 20:48:21.740827   45441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-877742" [b183118a-5376-412c-a991-eaebf0e6a46e] Running
	I0130 20:48:21.740831   45441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-877742" [cd5170b0-7d1c-45fd-9670-376d04e7016b] Running
	I0130 20:48:21.740835   45441 system_pods.go:89] "kube-proxy-59zvd" [ca6ef754-0898-4e1d-9ff2-9f42f456db6c] Running
	I0130 20:48:21.740840   45441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-877742" [5870d68e-b7af-408b-9484-a7e414bbe7f7] Running
	I0130 20:48:21.740846   45441 system_pods.go:89] "metrics-server-57f55c9bc5-xjc2m" [7b9a273b-d328-4ae8-925e-5bb305cfe574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:48:21.740853   45441 system_pods.go:89] "storage-provisioner" [db1a28e4-0c45-496e-a566-32a402b0841d] Running
	I0130 20:48:21.740860   45441 system_pods.go:126] duration metric: took 6.244006ms to wait for k8s-apps to be running ...
	I0130 20:48:21.740867   45441 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:48:21.740906   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:48:21.756380   45441 system_svc.go:56] duration metric: took 15.505755ms WaitForService to wait for kubelet.
	I0130 20:48:21.756405   45441 kubeadm.go:581] duration metric: took 4m15.894523943s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:48:21.756429   45441 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:48:21.759579   45441 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:48:21.759605   45441 node_conditions.go:123] node cpu capacity is 2
	I0130 20:48:21.759616   45441 node_conditions.go:105] duration metric: took 3.182491ms to run NodePressure ...
	I0130 20:48:21.759626   45441 start.go:228] waiting for startup goroutines ...
	I0130 20:48:21.759632   45441 start.go:233] waiting for cluster config update ...
	I0130 20:48:21.759642   45441 start.go:242] writing updated cluster config ...
	I0130 20:48:21.759879   45441 ssh_runner.go:195] Run: rm -f paused
	I0130 20:48:21.808471   45441 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:48:21.810628   45441 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-877742" cluster and "default" namespace by default
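	For reference, the log above shows the exact diagnostic commands minikube runs over SSH on the guest while waiting for the control plane. A minimal sketch of the same per-container log-gathering sequence, run by hand on the node (the kube-apiserver name and the $ID variable are placeholders; every command is taken verbatim from the log lines above):

	    # list a control-plane container by name and capture its ID
	    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	    # tail the last 400 lines of that container's logs
	    sudo /usr/bin/crictl logs --tail 400 "$ID"
	    # kubelet journal and kernel warnings/errors
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # node description via the minikube-bundled kubectl binary
	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig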
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 20:38:14 UTC, ends at Tue 2024-01-30 20:52:18 UTC. --
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.827222853Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706647937827209841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f4049594-d0f6-49c8-9c7b-248f1242ea18 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.828040565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9cac185c-334c-4839-a60b-d4962827348d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.828084640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9cac185c-334c-4839-a60b-d4962827348d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.828261336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647161473504325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4953867b0d06ea86a5c05932a01453ca0ed667a443bdf9ede0606f1821bb9,PodSandboxId:0c8049b581240989535266df9a54a3c8b0139ff64661303bb79927b4e76bf48e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647140868168618,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 689c9651-345a-43fd-aa34-90f6d5e6af09,},Annotations:map[string]string{io.kubernetes.container.hash: ec603722,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d,PodSandboxId:66b3a844ac9d2844629589a74faba10a47448c961a1e3a1c9f27a470b7ab5f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647137891660189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jqzzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59f362b6-606e-4bcd-b5eb-c8822aaf8b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 923a1a71,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706647130198167780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254,PodSandboxId:b9919313ba9b5930a3c49678e0e22bd83083ba0e16b63fc272fc247d8caa1a6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647130140498010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7q5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f109e0-7a
56-472f-8c7e-ba2b138de352,},Annotations:map[string]string{io.kubernetes.container.hash: 396cdb76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18,PodSandboxId:a84f96548609d7037f5403820927cdeba2fb19dee949b6dc469a39c510bda8f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647123760271132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3263ac53d6b91bfa78c53088de606433,},Annotations:map[string
]string{io.kubernetes.container.hash: 42a47b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f,PodSandboxId:4fb4b82b20065edcb49c98a7ee285d373ebcf0ea192cb88232862c8e887166f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647123516304267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0e3c20f03b0f0b3970d7212f3c0b776,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2,PodSandboxId:449d84e5ef66c8dc96ece0f76be94bbb4a99f48e32ea3cde50e251dca3e7a670,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647123431731822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8209177f62ae28e095966ad6f0cbbaa0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d,PodSandboxId:74457383bf69a71606341c6e8c2b0a0f1f7a82460f41cc7a2168177a7c019a1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647123178606099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99eedee0b4268817b10691671423352,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 94585cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9cac185c-334c-4839-a60b-d4962827348d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.867997010Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=971a7ff8-bb30-4384-bbcd-8f97bd425378 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.868089176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=971a7ff8-bb30-4384-bbcd-8f97bd425378 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.869198891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ca9251fe-c52f-4f90-a322-1102515d79a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.869541567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706647937869528336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ca9251fe-c52f-4f90-a322-1102515d79a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.870251688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=757a757f-d9ae-4207-9ee2-02073aa10694 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.870326407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=757a757f-d9ae-4207-9ee2-02073aa10694 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.870507682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647161473504325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4953867b0d06ea86a5c05932a01453ca0ed667a443bdf9ede0606f1821bb9,PodSandboxId:0c8049b581240989535266df9a54a3c8b0139ff64661303bb79927b4e76bf48e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647140868168618,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 689c9651-345a-43fd-aa34-90f6d5e6af09,},Annotations:map[string]string{io.kubernetes.container.hash: ec603722,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d,PodSandboxId:66b3a844ac9d2844629589a74faba10a47448c961a1e3a1c9f27a470b7ab5f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647137891660189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jqzzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59f362b6-606e-4bcd-b5eb-c8822aaf8b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 923a1a71,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706647130198167780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254,PodSandboxId:b9919313ba9b5930a3c49678e0e22bd83083ba0e16b63fc272fc247d8caa1a6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647130140498010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7q5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f109e0-7a
56-472f-8c7e-ba2b138de352,},Annotations:map[string]string{io.kubernetes.container.hash: 396cdb76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18,PodSandboxId:a84f96548609d7037f5403820927cdeba2fb19dee949b6dc469a39c510bda8f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647123760271132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3263ac53d6b91bfa78c53088de606433,},Annotations:map[string
]string{io.kubernetes.container.hash: 42a47b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f,PodSandboxId:4fb4b82b20065edcb49c98a7ee285d373ebcf0ea192cb88232862c8e887166f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647123516304267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0e3c20f03b0f0b3970d7212f3c0b776,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2,PodSandboxId:449d84e5ef66c8dc96ece0f76be94bbb4a99f48e32ea3cde50e251dca3e7a670,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647123431731822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8209177f62ae28e095966ad6f0cbbaa0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d,PodSandboxId:74457383bf69a71606341c6e8c2b0a0f1f7a82460f41cc7a2168177a7c019a1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647123178606099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99eedee0b4268817b10691671423352,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 94585cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=757a757f-d9ae-4207-9ee2-02073aa10694 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.916453564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cf754a88-3487-4bcb-a751-5796a9eeea07 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.916598092Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cf754a88-3487-4bcb-a751-5796a9eeea07 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.918365657Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=45fad63b-dd8d-4af8-b861-3b6ec5bccc39 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.919143130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706647937919121003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=45fad63b-dd8d-4af8-b861-3b6ec5bccc39 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.919652529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=275ea0fd-49b0-477d-9e9d-e9cfe63d808d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.919716404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=275ea0fd-49b0-477d-9e9d-e9cfe63d808d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.920037150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647161473504325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4953867b0d06ea86a5c05932a01453ca0ed667a443bdf9ede0606f1821bb9,PodSandboxId:0c8049b581240989535266df9a54a3c8b0139ff64661303bb79927b4e76bf48e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647140868168618,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 689c9651-345a-43fd-aa34-90f6d5e6af09,},Annotations:map[string]string{io.kubernetes.container.hash: ec603722,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d,PodSandboxId:66b3a844ac9d2844629589a74faba10a47448c961a1e3a1c9f27a470b7ab5f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647137891660189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jqzzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59f362b6-606e-4bcd-b5eb-c8822aaf8b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 923a1a71,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706647130198167780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254,PodSandboxId:b9919313ba9b5930a3c49678e0e22bd83083ba0e16b63fc272fc247d8caa1a6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647130140498010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7q5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f109e0-7a
56-472f-8c7e-ba2b138de352,},Annotations:map[string]string{io.kubernetes.container.hash: 396cdb76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18,PodSandboxId:a84f96548609d7037f5403820927cdeba2fb19dee949b6dc469a39c510bda8f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647123760271132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3263ac53d6b91bfa78c53088de606433,},Annotations:map[string
]string{io.kubernetes.container.hash: 42a47b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f,PodSandboxId:4fb4b82b20065edcb49c98a7ee285d373ebcf0ea192cb88232862c8e887166f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647123516304267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0e3c20f03b0f0b3970d7212f3c0b776,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2,PodSandboxId:449d84e5ef66c8dc96ece0f76be94bbb4a99f48e32ea3cde50e251dca3e7a670,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647123431731822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8209177f62ae28e095966ad6f0cbbaa0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d,PodSandboxId:74457383bf69a71606341c6e8c2b0a0f1f7a82460f41cc7a2168177a7c019a1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647123178606099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99eedee0b4268817b10691671423352,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 94585cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=275ea0fd-49b0-477d-9e9d-e9cfe63d808d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.970551218Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f20bd400-5e04-45e2-bc34-947260b6ce9d name=/runtime.v1.RuntimeService/Version
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.970608075Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f20bd400-5e04-45e2-bc34-947260b6ce9d name=/runtime.v1.RuntimeService/Version
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.971736125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=51d216c8-ec30-414f-b90e-8242fc70fda1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.972186372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706647937972173911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=51d216c8-ec30-414f-b90e-8242fc70fda1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.972871045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=626ed693-cd2c-4aba-82ac-970415f474e1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.972971146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=626ed693-cd2c-4aba-82ac-970415f474e1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:52:17 embed-certs-208583 crio[720]: time="2024-01-30 20:52:17.973143927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647161473504325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4953867b0d06ea86a5c05932a01453ca0ed667a443bdf9ede0606f1821bb9,PodSandboxId:0c8049b581240989535266df9a54a3c8b0139ff64661303bb79927b4e76bf48e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647140868168618,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 689c9651-345a-43fd-aa34-90f6d5e6af09,},Annotations:map[string]string{io.kubernetes.container.hash: ec603722,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d,PodSandboxId:66b3a844ac9d2844629589a74faba10a47448c961a1e3a1c9f27a470b7ab5f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647137891660189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jqzzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59f362b6-606e-4bcd-b5eb-c8822aaf8b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 923a1a71,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706647130198167780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254,PodSandboxId:b9919313ba9b5930a3c49678e0e22bd83083ba0e16b63fc272fc247d8caa1a6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647130140498010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7q5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f109e0-7a
56-472f-8c7e-ba2b138de352,},Annotations:map[string]string{io.kubernetes.container.hash: 396cdb76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18,PodSandboxId:a84f96548609d7037f5403820927cdeba2fb19dee949b6dc469a39c510bda8f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647123760271132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3263ac53d6b91bfa78c53088de606433,},Annotations:map[string
]string{io.kubernetes.container.hash: 42a47b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f,PodSandboxId:4fb4b82b20065edcb49c98a7ee285d373ebcf0ea192cb88232862c8e887166f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647123516304267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0e3c20f03b0f0b3970d7212f3c0b776,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2,PodSandboxId:449d84e5ef66c8dc96ece0f76be94bbb4a99f48e32ea3cde50e251dca3e7a670,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647123431731822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8209177f62ae28e095966ad6f0cbbaa0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d,PodSandboxId:74457383bf69a71606341c6e8c2b0a0f1f7a82460f41cc7a2168177a7c019a1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647123178606099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99eedee0b4268817b10691671423352,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 94585cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=626ed693-cd2c-4aba-82ac-970415f474e1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	84ab3bb4fc327       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   ab9925835e346       storage-provisioner
	bdb4953867b0d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   0c8049b581240       busybox
	4c08f1c12145a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   66b3a844ac9d2       coredns-5dd5756b68-jqzzv
	5dbd1a278b495       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   ab9925835e346       storage-provisioner
	cceda50230a0f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   b9919313ba9b5       kube-proxy-g7q5t
	0684f62c32df0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   a84f96548609d       etcd-embed-certs-208583
	74b99df1e69b6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   4fb4b82b20065       kube-scheduler-embed-certs-208583
	b53924cf08f0c       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   449d84e5ef66c       kube-controller-manager-embed-certs-208583
	f2b510da3b115       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   74457383bf69a       kube-apiserver-embed-certs-208583
	
	
	==> coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38716 - 44587 "HINFO IN 9201679870384010574.7855106596069656275. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012753039s
	
	
	==> describe nodes <==
	Name:               embed-certs-208583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-208583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=embed-certs-208583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T20_29_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 20:29:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-208583
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 20:52:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 20:49:31 +0000   Tue, 30 Jan 2024 20:29:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 20:49:31 +0000   Tue, 30 Jan 2024 20:29:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 20:49:31 +0000   Tue, 30 Jan 2024 20:29:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 20:49:31 +0000   Tue, 30 Jan 2024 20:38:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.63
	  Hostname:    embed-certs-208583
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdb5105259974561b918af369df02796
	  System UUID:                bdb51052-5997-4561-b918-af369df02796
	  Boot ID:                    ab0320e5-8c2d-4df3-b351-d7c99f8ce415
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-jqzzv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-208583                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-208583             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-208583    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-g7q5t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-208583             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-ghg9n               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-208583 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-208583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-208583 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                22m                kubelet          Node embed-certs-208583 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-208583 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-208583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-208583 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-208583 event: Registered Node embed-certs-208583 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-208583 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-208583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-208583 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-208583 event: Registered Node embed-certs-208583 in Controller
	
	
	==> dmesg <==
	[Jan30 20:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066069] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.339494] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.214283] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +0.145036] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.482974] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.098762] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.116494] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.143886] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.127771] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.222496] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[ +17.333923] systemd-fstab-generator[920]: Ignoring "noauto" for root device
	[ +15.290435] kauditd_printk_skb: 19 callbacks suppressed
	[Jan30 20:39] hrtimer: interrupt took 2691671 ns
	
	
	==> etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] <==
	{"level":"info","ts":"2024-01-30T20:38:54.032169Z","caller":"traceutil/trace.go:171","msg":"trace[231808716] linearizableReadLoop","detail":"{readStateIndex:611; appliedIndex:610; }","duration":"674.192751ms","start":"2024-01-30T20:38:53.357966Z","end":"2024-01-30T20:38:54.032159Z","steps":["trace[231808716] 'read index received'  (duration: 303.148044ms)","trace[231808716] 'applied index is now lower than readState.Index'  (duration: 371.043695ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-30T20:38:54.032224Z","caller":"traceutil/trace.go:171","msg":"trace[1666154603] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"777.811591ms","start":"2024-01-30T20:38:53.254406Z","end":"2024-01-30T20:38:54.032218Z","steps":["trace[1666154603] 'process raft request'  (duration: 406.765408ms)","trace[1666154603] 'compare'  (duration: 370.400325ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-30T20:38:54.032266Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:38:53.254391Z","time spent":"777.848075ms","remote":"127.0.0.1:35170","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":747,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.17af3a85a30eb5f1\" mod_revision:567 > success:<request_put:<key:\"/registry/events/default/busybox.17af3a85a30eb5f1\" value_size:680 lease:8533069345670915945 >> failure:<request_range:<key:\"/registry/events/default/busybox.17af3a85a30eb5f1\" > >"}
	{"level":"warn","ts":"2024-01-30T20:38:54.032468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"420.391952ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:721"}
	{"level":"warn","ts":"2024-01-30T20:38:54.032549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.86655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-208583\" ","response":"range_response_count:1 size:5677"}
	{"level":"info","ts":"2024-01-30T20:38:54.0326Z","caller":"traceutil/trace.go:171","msg":"trace[661698520] range","detail":"{range_begin:/registry/minions/embed-certs-208583; range_end:; response_count:1; response_revision:572; }","duration":"312.917115ms","start":"2024-01-30T20:38:53.719675Z","end":"2024-01-30T20:38:54.032592Z","steps":["trace[661698520] 'agreement among raft nodes before linearized reading'  (duration: 312.842355ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:38:54.032624Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:38:53.719659Z","time spent":"312.958838ms","remote":"127.0.0.1:35192","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5700,"request content":"key:\"/registry/minions/embed-certs-208583\" "}
	{"level":"warn","ts":"2024-01-30T20:38:54.032739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"674.791963ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2024-01-30T20:38:54.032945Z","caller":"traceutil/trace.go:171","msg":"trace[1853740171] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:572; }","duration":"674.939797ms","start":"2024-01-30T20:38:53.357942Z","end":"2024-01-30T20:38:54.032882Z","steps":["trace[1853740171] 'agreement among raft nodes before linearized reading'  (duration: 674.77413ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:38:54.032972Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:38:53.357927Z","time spent":"675.037768ms","remote":"127.0.0.1:35236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":1015,"request content":"key:\"/registry/storageclasses/standard\" "}
	{"level":"info","ts":"2024-01-30T20:38:54.03255Z","caller":"traceutil/trace.go:171","msg":"trace[1336100563] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:572; }","duration":"420.512097ms","start":"2024-01-30T20:38:53.612024Z","end":"2024-01-30T20:38:54.032536Z","steps":["trace[1336100563] 'agreement among raft nodes before linearized reading'  (duration: 420.321252ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:38:54.033078Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:38:53.61201Z","time spent":"421.062331ms","remote":"127.0.0.1:35198","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":744,"request content":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" "}
	{"level":"warn","ts":"2024-01-30T20:38:54.035593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"411.041529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:1 size:2362"}
	{"level":"info","ts":"2024-01-30T20:38:54.035651Z","caller":"traceutil/trace.go:171","msg":"trace[1509354512] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:572; }","duration":"411.103585ms","start":"2024-01-30T20:38:53.624539Z","end":"2024-01-30T20:38:54.035643Z","steps":["trace[1509354512] 'agreement among raft nodes before linearized reading'  (duration: 408.028832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:38:54.035676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:38:53.624523Z","time spent":"411.146567ms","remote":"127.0.0.1:35270","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":2385,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" "}
	{"level":"warn","ts":"2024-01-30T20:38:54.672398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"452.239978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-208583\" ","response":"range_response_count:1 size:5677"}
	{"level":"info","ts":"2024-01-30T20:38:54.672702Z","caller":"traceutil/trace.go:171","msg":"trace[121966225] range","detail":"{range_begin:/registry/minions/embed-certs-208583; range_end:; response_count:1; response_revision:572; }","duration":"452.560412ms","start":"2024-01-30T20:38:54.220123Z","end":"2024-01-30T20:38:54.672684Z","steps":["trace[121966225] 'range keys from in-memory index tree'  (duration: 452.125387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:38:54.672963Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:38:54.220107Z","time spent":"452.830486ms","remote":"127.0.0.1:35192","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5700,"request content":"key:\"/registry/minions/embed-certs-208583\" "}
	{"level":"info","ts":"2024-01-30T20:39:34.507364Z","caller":"traceutil/trace.go:171","msg":"trace[831219265] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:679; }","duration":"224.134605ms","start":"2024-01-30T20:39:34.283196Z","end":"2024-01-30T20:39:34.507331Z","steps":["trace[831219265] 'read index received'  (duration: 205.240065ms)","trace[831219265] 'applied index is now lower than readState.Index'  (duration: 18.89331ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-30T20:39:34.507542Z","caller":"traceutil/trace.go:171","msg":"trace[1371141812] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"231.2881ms","start":"2024-01-30T20:39:34.276241Z","end":"2024-01-30T20:39:34.507529Z","steps":["trace[1371141812] 'process raft request'  (duration: 212.235642ms)","trace[1371141812] 'compare'  (duration: 18.587527ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-30T20:39:34.507963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.768116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-ghg9n\" ","response":"range_response_count:1 size:4026"}
	{"level":"info","ts":"2024-01-30T20:39:34.508032Z","caller":"traceutil/trace.go:171","msg":"trace[53200355] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-ghg9n; range_end:; response_count:1; response_revision:633; }","duration":"224.848006ms","start":"2024-01-30T20:39:34.283177Z","end":"2024-01-30T20:39:34.508025Z","steps":["trace[53200355] 'agreement among raft nodes before linearized reading'  (duration: 224.701048ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T20:48:46.888016Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":851}
	{"level":"info","ts":"2024-01-30T20:48:46.891215Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":851,"took":"2.864276ms","hash":341810038}
	{"level":"info","ts":"2024-01-30T20:48:46.891278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":341810038,"revision":851,"compact-revision":-1}
	
	
	==> kernel <==
	 20:52:18 up 14 min,  0 users,  load average: 0.10, 0.17, 0.10
	Linux embed-certs-208583 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] <==
	I0130 20:48:48.820976       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 20:48:49.821417       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:48:49.821480       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:48:49.821491       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:48:49.821441       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:48:49.821683       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:48:49.823047       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:49:48.590049       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 20:49:49.822641       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:49:49.822735       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:49:49.822820       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:49:49.824037       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:49:49.824127       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:49:49.824138       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:50:48.590478       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 20:51:48.590175       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 20:51:49.823541       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:51:49.823630       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:51:49.823643       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:51:49.824710       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:51:49.824930       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:51:49.824977       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] <==
	I0130 20:46:31.957943       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:47:01.471200       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:47:01.967832       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:47:31.475568       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:47:31.975298       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:48:01.482266       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:48:01.984645       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:48:31.487959       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:48:31.992743       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:49:01.496943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:49:02.001602       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:49:31.504865       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:49:32.010266       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 20:49:53.248302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="575.794µs"
	E0130 20:50:01.512985       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:50:02.020622       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 20:50:04.259739       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="317.338µs"
	E0130 20:50:31.518949       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:50:32.029721       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:51:01.524703       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:51:02.044241       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:51:31.529896       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:51:32.054536       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:52:01.537316       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:52:02.066199       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] <==
	I0130 20:38:50.535555       1 server_others.go:69] "Using iptables proxy"
	I0130 20:38:50.555469       1 node.go:141] Successfully retrieved node IP: 192.168.61.63
	I0130 20:38:50.706519       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 20:38:50.706586       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 20:38:50.712720       1 server_others.go:152] "Using iptables Proxier"
	I0130 20:38:50.712844       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 20:38:50.713031       1 server.go:846] "Version info" version="v1.28.4"
	I0130 20:38:50.713091       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:38:50.715309       1 config.go:188] "Starting service config controller"
	I0130 20:38:50.715350       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 20:38:50.715391       1 config.go:97] "Starting endpoint slice config controller"
	I0130 20:38:50.715395       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 20:38:50.715586       1 config.go:315] "Starting node config controller"
	I0130 20:38:50.715592       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 20:38:50.818690       1 shared_informer.go:318] Caches are synced for node config
	I0130 20:38:50.818863       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0130 20:38:50.818936       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] <==
	I0130 20:38:45.611826       1 serving.go:348] Generated self-signed cert in-memory
	W0130 20:38:48.697259       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0130 20:38:48.697415       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 20:38:48.697457       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0130 20:38:48.697488       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0130 20:38:48.858349       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0130 20:38:48.858463       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:38:48.862959       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0130 20:38:48.863031       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0130 20:38:48.864077       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0130 20:38:48.864316       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0130 20:38:48.964636       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 20:38:14 UTC, ends at Tue 2024-01-30 20:52:18 UTC. --
	Jan 30 20:49:38 embed-certs-208583 kubelet[926]: E0130 20:49:38.251117     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:49:42 embed-certs-208583 kubelet[926]: E0130 20:49:42.245357     926 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:49:42 embed-certs-208583 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:49:42 embed-certs-208583 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:49:42 embed-certs-208583 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:49:53 embed-certs-208583 kubelet[926]: E0130 20:49:53.228956     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:50:04 embed-certs-208583 kubelet[926]: E0130 20:50:04.232876     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:50:16 embed-certs-208583 kubelet[926]: E0130 20:50:16.230538     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:50:29 embed-certs-208583 kubelet[926]: E0130 20:50:29.228708     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:50:40 embed-certs-208583 kubelet[926]: E0130 20:50:40.229680     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:50:42 embed-certs-208583 kubelet[926]: E0130 20:50:42.243893     926 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:50:42 embed-certs-208583 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:50:42 embed-certs-208583 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:50:42 embed-certs-208583 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:50:55 embed-certs-208583 kubelet[926]: E0130 20:50:55.228998     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:51:07 embed-certs-208583 kubelet[926]: E0130 20:51:07.229024     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:51:20 embed-certs-208583 kubelet[926]: E0130 20:51:20.231015     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:51:35 embed-certs-208583 kubelet[926]: E0130 20:51:35.228852     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:51:42 embed-certs-208583 kubelet[926]: E0130 20:51:42.242658     926 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:51:42 embed-certs-208583 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:51:42 embed-certs-208583 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:51:42 embed-certs-208583 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:51:49 embed-certs-208583 kubelet[926]: E0130 20:51:49.229043     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:52:04 embed-certs-208583 kubelet[926]: E0130 20:52:04.229293     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:52:17 embed-certs-208583 kubelet[926]: E0130 20:52:17.228289     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	
	
	==> storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] <==
	I0130 20:38:50.486906       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0130 20:39:20.489282       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] <==
	I0130 20:39:21.621208       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 20:39:21.642033       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 20:39:21.642222       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 20:39:39.045827       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 20:39:39.045980       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-208583_e5da18ec-ba4a-443b-98b6-d4f3cc1af7e8!
	I0130 20:39:39.047893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d2a5e740-c445-4dba-b408-fd63b3f21abd", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-208583_e5da18ec-ba4a-443b-98b6-d4f3cc1af7e8 became leader
	I0130 20:39:39.146943       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-208583_e5da18ec-ba4a-443b-98b6-d4f3cc1af7e8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208583 -n embed-certs-208583
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-208583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ghg9n
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-208583 describe pod metrics-server-57f55c9bc5-ghg9n
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-208583 describe pod metrics-server-57f55c9bc5-ghg9n: exit status 1 (66.554722ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ghg9n" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-208583 describe pod metrics-server-57f55c9bc5-ghg9n: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0130 20:45:02.760504   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-473743 -n no-preload-473743
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-30 20:53:36.14205798 +0000 UTC m=+5453.219027750
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473743 -n no-preload-473743
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-473743 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-473743 logs -n 25: (1.759811397s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:28 UTC | 30 Jan 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:28 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-757744 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | disable-driver-mounts-757744                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:31 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473743             | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208583            | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC | 30 Jan 24 20:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-877742  | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC | 30 Jan 24 20:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC |                     |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473743                  | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208583                 | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-150971        | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-877742       | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC | 30 Jan 24 20:48 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-150971             | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC | 30 Jan 24 20:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 20:36:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 20:36:09.643751   45819 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:36:09.644027   45819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:36:09.644038   45819 out.go:309] Setting ErrFile to fd 2...
	I0130 20:36:09.644045   45819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:36:09.644230   45819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:36:09.644766   45819 out.go:303] Setting JSON to false
	I0130 20:36:09.645668   45819 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4715,"bootTime":1706642255,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 20:36:09.645727   45819 start.go:138] virtualization: kvm guest
	I0130 20:36:09.648102   45819 out.go:177] * [old-k8s-version-150971] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 20:36:09.649772   45819 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 20:36:09.651000   45819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 20:36:09.649826   45819 notify.go:220] Checking for updates...
	I0130 20:36:09.653462   45819 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:36:09.654761   45819 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 20:36:09.655939   45819 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 20:36:09.657140   45819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 20:36:09.658638   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:36:09.659027   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:36:09.659066   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:36:09.672985   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39323
	I0130 20:36:09.673381   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:36:09.673876   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:36:09.673897   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:36:09.674191   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:36:09.674351   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:36:09.676038   45819 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0130 20:36:09.677315   45819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 20:36:09.677582   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:36:09.677630   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:36:09.691259   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0130 20:36:09.691604   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:36:09.692060   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:36:09.692089   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:36:09.692371   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:36:09.692555   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:36:09.726172   45819 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 20:36:09.727421   45819 start.go:298] selected driver: kvm2
	I0130 20:36:09.727433   45819 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:36:09.727546   45819 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 20:36:09.728186   45819 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:36:09.728255   45819 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 20:36:09.742395   45819 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 20:36:09.742715   45819 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 20:36:09.742771   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:36:09.742784   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:36:09.742794   45819 start_flags.go:321] config:
	{Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:36:09.742977   45819 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:36:09.745577   45819 out.go:177] * Starting control plane node old-k8s-version-150971 in cluster old-k8s-version-150971
	I0130 20:36:10.483495   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:09.746820   45819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 20:36:09.746852   45819 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0130 20:36:09.746865   45819 cache.go:56] Caching tarball of preloaded images
	I0130 20:36:09.746951   45819 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 20:36:09.746960   45819 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0130 20:36:09.747061   45819 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/config.json ...
	I0130 20:36:09.747229   45819 start.go:365] acquiring machines lock for old-k8s-version-150971: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:36:13.555547   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:19.635533   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:22.707498   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:28.787473   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:31.859544   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:37.939524   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:41.011456   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:47.091510   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:50.163505   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:56.243497   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:59.315474   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:05.395536   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:08.467514   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:14.547517   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:17.619561   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:23.699509   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:26.771568   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:32.851483   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:35.923502   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:42.003515   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:45.075526   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:51.155512   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:54.227514   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:38:00.307532   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:38:03.311451   45037 start.go:369] acquired machines lock for "embed-certs-208583" in 4m29.471089592s
	I0130 20:38:03.311507   45037 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:03.311514   45037 fix.go:54] fixHost starting: 
	I0130 20:38:03.311893   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:03.311933   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:03.326477   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0130 20:38:03.326949   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:03.327373   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:03.327403   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:03.327758   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:03.327946   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:03.328115   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:03.329604   45037 fix.go:102] recreateIfNeeded on embed-certs-208583: state=Stopped err=<nil>
	I0130 20:38:03.329646   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	W0130 20:38:03.329810   45037 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:03.331493   45037 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208583" ...
	I0130 20:38:03.332735   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Start
	I0130 20:38:03.332862   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring networks are active...
	I0130 20:38:03.333514   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring network default is active
	I0130 20:38:03.333859   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring network mk-embed-certs-208583 is active
	I0130 20:38:03.334154   45037 main.go:141] libmachine: (embed-certs-208583) Getting domain xml...
	I0130 20:38:03.334860   45037 main.go:141] libmachine: (embed-certs-208583) Creating domain...
	I0130 20:38:03.309254   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:03.309293   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:38:03.311318   44923 machine.go:91] provisioned docker machine in 4m37.382925036s
	I0130 20:38:03.311359   44923 fix.go:56] fixHost completed within 4m37.403399512s
	I0130 20:38:03.311364   44923 start.go:83] releasing machines lock for "no-preload-473743", held for 4m37.403435936s
	W0130 20:38:03.311387   44923 start.go:694] error starting host: provision: host is not running
	W0130 20:38:03.311504   44923 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0130 20:38:03.311518   44923 start.go:709] Will try again in 5 seconds ...
	I0130 20:38:04.507963   45037 main.go:141] libmachine: (embed-certs-208583) Waiting to get IP...
	I0130 20:38:04.508755   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.509133   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.509207   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.509115   46132 retry.go:31] will retry after 189.527185ms: waiting for machine to come up
	I0130 20:38:04.700560   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.701193   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.701223   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.701137   46132 retry.go:31] will retry after 239.29825ms: waiting for machine to come up
	I0130 20:38:04.941612   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.942080   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.942116   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.942040   46132 retry.go:31] will retry after 388.672579ms: waiting for machine to come up
	I0130 20:38:05.332617   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:05.333018   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:05.333041   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:05.332968   46132 retry.go:31] will retry after 525.5543ms: waiting for machine to come up
	I0130 20:38:05.859677   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:05.860094   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:05.860126   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:05.860055   46132 retry.go:31] will retry after 595.87535ms: waiting for machine to come up
	I0130 20:38:06.457828   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:06.458220   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:06.458244   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:06.458197   46132 retry.go:31] will retry after 766.148522ms: waiting for machine to come up
	I0130 20:38:07.226151   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:07.226615   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:07.226652   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:07.226558   46132 retry.go:31] will retry after 843.449223ms: waiting for machine to come up
	I0130 20:38:08.070983   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:08.071381   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:08.071407   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:08.071338   46132 retry.go:31] will retry after 1.079839146s: waiting for machine to come up
	I0130 20:38:08.313897   44923 start.go:365] acquiring machines lock for no-preload-473743: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:38:09.152768   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:09.153087   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:09.153113   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:09.153034   46132 retry.go:31] will retry after 1.855245571s: waiting for machine to come up
	I0130 20:38:11.010893   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:11.011260   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:11.011299   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:11.011196   46132 retry.go:31] will retry after 2.159062372s: waiting for machine to come up
	I0130 20:38:13.172734   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:13.173144   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:13.173173   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:13.173106   46132 retry.go:31] will retry after 2.73165804s: waiting for machine to come up
	I0130 20:38:15.908382   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:15.908803   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:15.908834   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:15.908732   46132 retry.go:31] will retry after 3.268718285s: waiting for machine to come up
	I0130 20:38:23.603972   45441 start.go:369] acquired machines lock for "default-k8s-diff-port-877742" in 3m48.064811183s
	I0130 20:38:23.604051   45441 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:23.604061   45441 fix.go:54] fixHost starting: 
	I0130 20:38:23.604420   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:23.604456   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:23.620189   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0130 20:38:23.620538   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:23.621035   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:38:23.621073   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:23.621415   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:23.621584   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:23.621739   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:38:23.623158   45441 fix.go:102] recreateIfNeeded on default-k8s-diff-port-877742: state=Stopped err=<nil>
	I0130 20:38:23.623185   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	W0130 20:38:23.623382   45441 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:23.625974   45441 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-877742" ...
	I0130 20:38:19.178930   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:19.179358   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:19.179389   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:19.179300   46132 retry.go:31] will retry after 3.117969425s: waiting for machine to come up
	I0130 20:38:22.300539   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.300957   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has current primary IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.300982   45037 main.go:141] libmachine: (embed-certs-208583) Found IP for machine: 192.168.61.63
	I0130 20:38:22.300997   45037 main.go:141] libmachine: (embed-certs-208583) Reserving static IP address...
	I0130 20:38:22.301371   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "embed-certs-208583", mac: "52:54:00:43:f2:e1", ip: "192.168.61.63"} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.301395   45037 main.go:141] libmachine: (embed-certs-208583) Reserved static IP address: 192.168.61.63
	I0130 20:38:22.301409   45037 main.go:141] libmachine: (embed-certs-208583) DBG | skip adding static IP to network mk-embed-certs-208583 - found existing host DHCP lease matching {name: "embed-certs-208583", mac: "52:54:00:43:f2:e1", ip: "192.168.61.63"}
	I0130 20:38:22.301420   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Getting to WaitForSSH function...
	I0130 20:38:22.301436   45037 main.go:141] libmachine: (embed-certs-208583) Waiting for SSH to be available...
	I0130 20:38:22.303472   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.303820   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.303842   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.303968   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Using SSH client type: external
	I0130 20:38:22.304011   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa (-rw-------)
	I0130 20:38:22.304042   45037 main.go:141] libmachine: (embed-certs-208583) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:38:22.304052   45037 main.go:141] libmachine: (embed-certs-208583) DBG | About to run SSH command:
	I0130 20:38:22.304065   45037 main.go:141] libmachine: (embed-certs-208583) DBG | exit 0
	I0130 20:38:22.398610   45037 main.go:141] libmachine: (embed-certs-208583) DBG | SSH cmd err, output: <nil>: 
	I0130 20:38:22.398945   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetConfigRaw
	I0130 20:38:22.399605   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:22.402157   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.402531   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.402569   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.402759   45037 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/config.json ...
	I0130 20:38:22.402974   45037 machine.go:88] provisioning docker machine ...
	I0130 20:38:22.402999   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:22.403238   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.403440   45037 buildroot.go:166] provisioning hostname "embed-certs-208583"
	I0130 20:38:22.403462   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.403642   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.405694   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.406026   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.406055   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.406180   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.406429   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.406599   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.406734   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.406904   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:22.407422   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:22.407446   45037 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208583 && echo "embed-certs-208583" | sudo tee /etc/hostname
	I0130 20:38:22.548206   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208583
	
	I0130 20:38:22.548240   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.550933   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.551316   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.551345   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.551492   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.551690   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.551821   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.551934   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.552129   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:22.552425   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:22.552441   45037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:38:22.687464   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:22.687491   45037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:38:22.687536   45037 buildroot.go:174] setting up certificates
	I0130 20:38:22.687551   45037 provision.go:83] configureAuth start
	I0130 20:38:22.687562   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.687813   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:22.690307   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.690664   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.690686   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.690855   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.693139   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.693426   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.693462   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.693597   45037 provision.go:138] copyHostCerts
	I0130 20:38:22.693667   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:38:22.693686   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:38:22.693766   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:38:22.693866   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:38:22.693876   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:38:22.693912   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:38:22.693986   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:38:22.693997   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:38:22.694036   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:38:22.694122   45037 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208583 san=[192.168.61.63 192.168.61.63 localhost 127.0.0.1 minikube embed-certs-208583]
	I0130 20:38:22.862847   45037 provision.go:172] copyRemoteCerts
	I0130 20:38:22.862902   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:38:22.862921   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.865533   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.865812   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.865842   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.866006   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.866200   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.866315   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.866496   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:22.959746   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:38:22.982164   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 20:38:23.004087   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:38:23.025875   45037 provision.go:86] duration metric: configureAuth took 338.306374ms
	I0130 20:38:23.025896   45037 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:38:23.026090   45037 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:23.026173   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.028688   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.028913   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.028946   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.029125   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.029277   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.029430   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.029550   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.029679   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:23.029980   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:23.029995   45037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:38:23.337986   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:38:23.338008   45037 machine.go:91] provisioned docker machine in 935.018208ms
	I0130 20:38:23.338016   45037 start.go:300] post-start starting for "embed-certs-208583" (driver="kvm2")
	I0130 20:38:23.338026   45037 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:38:23.338051   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.338301   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:38:23.338327   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.341005   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.341398   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.341429   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.341516   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.341686   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.341825   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.341997   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.437500   45037 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:38:23.441705   45037 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:38:23.441724   45037 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:38:23.441784   45037 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:38:23.441851   45037 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:38:23.441937   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:38:23.450700   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:23.471898   45037 start.go:303] post-start completed in 133.870929ms
	I0130 20:38:23.471916   45037 fix.go:56] fixHost completed within 20.160401625s
	I0130 20:38:23.471940   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.474341   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.474659   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.474695   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.474793   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.474984   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.475181   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.475341   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.475515   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:23.475878   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:23.475891   45037 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0130 20:38:23.603819   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647103.552984334
	
	I0130 20:38:23.603841   45037 fix.go:206] guest clock: 1706647103.552984334
	I0130 20:38:23.603848   45037 fix.go:219] Guest: 2024-01-30 20:38:23.552984334 +0000 UTC Remote: 2024-01-30 20:38:23.471920461 +0000 UTC m=+289.780929635 (delta=81.063873ms)
	I0130 20:38:23.603879   45037 fix.go:190] guest clock delta is within tolerance: 81.063873ms
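The fix.go lines above read the guest's clock over SSH with `date +%s.%N`, compare it against the host-side timestamp, and only act when the difference exceeds a tolerance. Below is a minimal Go sketch of that comparison using the two timestamps from the log; the tolerance value is an assumption, since the log only reports that the ~81ms delta was "within tolerance".

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // parseEpoch turns the output of `date +%s.%N` (e.g. "1706647103.552984334") into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
        f, err := strconv.ParseFloat(s, 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(f)
        nsec := int64((f - float64(sec)) * 1e9)
        return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
        guest, _ := parseEpoch("1706647103.552984334")  // guest clock reported above
        remote, _ := parseEpoch("1706647103.471920461") // host-side timestamp reported above
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed threshold; the log only says "within tolerance"
        if delta <= tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync guest clock\n", delta)
        }
    }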
	I0130 20:38:23.603885   45037 start.go:83] releasing machines lock for "embed-certs-208583", held for 20.292396099s
	I0130 20:38:23.603916   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.604168   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:23.606681   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.607027   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.607060   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.607190   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607703   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607876   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607947   45037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:38:23.607999   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.608115   45037 ssh_runner.go:195] Run: cat /version.json
	I0130 20:38:23.608140   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.610693   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611052   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.611078   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611154   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611199   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.611380   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.611530   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.611585   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.611625   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611666   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.611790   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.611935   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.612081   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.612197   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.725868   45037 ssh_runner.go:195] Run: systemctl --version
	I0130 20:38:23.731516   45037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:38:23.872093   45037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:38:23.878418   45037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:38:23.878493   45037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:38:23.892910   45037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:38:23.892934   45037 start.go:475] detecting cgroup driver to use...
	I0130 20:38:23.893007   45037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:38:23.905950   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:38:23.917437   45037 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:38:23.917484   45037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:38:23.929241   45037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:38:23.940979   45037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:38:24.045106   45037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:38:24.160413   45037 docker.go:233] disabling docker service ...
	I0130 20:38:24.160486   45037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:38:24.173684   45037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:38:24.185484   45037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:38:24.308292   45037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:38:24.430021   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:38:24.442910   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:38:24.460145   45037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:38:24.460211   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.469163   45037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:38:24.469225   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.478396   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.487374   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.496306   45037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:38:24.505283   45037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:38:24.512919   45037 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:38:24.512974   45037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:38:24.523939   45037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:38:24.533002   45037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:38:24.665917   45037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:38:24.839797   45037 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:38:24.839866   45037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:38:24.851397   45037 start.go:543] Will wait 60s for crictl version
	I0130 20:38:24.851454   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:38:24.855227   45037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:38:24.888083   45037 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:38:24.888163   45037 ssh_runner.go:195] Run: crio --version
	I0130 20:38:24.934626   45037 ssh_runner.go:195] Run: crio --version
	I0130 20:38:24.984233   45037 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
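Before the `systemctl restart crio` above, the log rewrites /etc/crio/crio.conf.d/02-crio.conf with sed: it pins the pause image to registry.k8s.io/pause:3.9, forces cgroup_manager to "cgroupfs", and re-adds conmon_cgroup = "pod". The Go sketch below applies the same substitutions to an in-memory config string for illustration; the starting contents are a plausible placeholder, not the guest's actual file, which minikube edits over SSH.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // plausible starting contents; the real file is /etc/crio/crio.conf.d/02-crio.conf on the guest
        conf := "[crio.image]\n" +
            "pause_image = \"registry.k8s.io/pause:3.6\"\n" +
            "[crio.runtime]\n" +
            "cgroup_manager = \"systemd\"\n" +
            "conmon_cgroup = \"system.slice\"\n"

        // sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        // sed '/conmon_cgroup = .*/d'
        conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        // sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' followed by
        // sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

        fmt.Print(conf)
    }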
	I0130 20:38:23.627365   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Start
	I0130 20:38:23.627532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring networks are active...
	I0130 20:38:23.628247   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring network default is active
	I0130 20:38:23.628650   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring network mk-default-k8s-diff-port-877742 is active
	I0130 20:38:23.629109   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Getting domain xml...
	I0130 20:38:23.629715   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Creating domain...
	I0130 20:38:24.849156   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting to get IP...
	I0130 20:38:24.850261   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:24.850701   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:24.850729   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:24.850645   46249 retry.go:31] will retry after 259.328149ms: waiting for machine to come up
	I0130 20:38:25.112451   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.112941   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.112971   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.112905   46249 retry.go:31] will retry after 283.994822ms: waiting for machine to come up
	I0130 20:38:25.398452   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.398937   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.398968   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.398904   46249 retry.go:31] will retry after 348.958329ms: waiting for machine to come up
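The retry.go lines above poll libvirt for the machine's DHCP lease and sleep a growing, jittered interval between attempts. Below is a rough Go sketch of that wait loop; lookupIP is a hypothetical stand-in for the lease lookup, and the delay calculation only approximates the 259ms/283ms/348ms progression shown in the log.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // lookupIP is a hypothetical stand-in for querying the libvirt network's DHCP leases.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errNoIP
        }
        return "192.168.72.52", nil
    }

    func main() {
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("machine came up with IP", ip)
                return
            }
            // growing, jittered delay, roughly matching the retry intervals logged above
            delay := time.Duration(200+rand.Intn(200)*attempt) * time.Millisecond
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
        }
    }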
	I0130 20:38:24.985681   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:24.988666   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:24.989016   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:24.989042   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:24.989288   45037 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0130 20:38:24.993626   45037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:25.005749   45037 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:38:25.005817   45037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:25.047605   45037 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:38:25.047674   45037 ssh_runner.go:195] Run: which lz4
	I0130 20:38:25.051662   45037 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0130 20:38:25.055817   45037 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:38:25.055849   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:38:26.895244   45037 crio.go:444] Took 1.843605 seconds to copy over tarball
	I0130 20:38:26.895332   45037 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:38:25.749560   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.750020   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.750048   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.749985   46249 retry.go:31] will retry after 597.656366ms: waiting for machine to come up
	I0130 20:38:26.349518   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.349957   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.350004   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:26.349929   46249 retry.go:31] will retry after 600.926171ms: waiting for machine to come up
	I0130 20:38:26.952713   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.953319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.953343   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:26.953276   46249 retry.go:31] will retry after 654.976543ms: waiting for machine to come up
	I0130 20:38:27.610017   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:27.610464   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:27.610494   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:27.610413   46249 retry.go:31] will retry after 881.075627ms: waiting for machine to come up
	I0130 20:38:28.493641   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:28.494188   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:28.494218   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:28.494136   46249 retry.go:31] will retry after 1.436302447s: waiting for machine to come up
	I0130 20:38:29.932271   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:29.932794   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:29.932825   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:29.932729   46249 retry.go:31] will retry after 1.394659615s: waiting for machine to come up
	I0130 20:38:29.834721   45037 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.939351369s)
	I0130 20:38:29.834746   45037 crio.go:451] Took 2.939470 seconds to extract the tarball
	I0130 20:38:29.834754   45037 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:38:29.875618   45037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:29.921569   45037 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:38:29.921593   45037 cache_images.go:84] Images are preloaded, skipping loading
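The preload flow above asks crictl which images are already on the node and only copies and extracts /preloaded.tar.lz4 when the pinned kube-apiserver image is missing. The Go sketch below shows that decision in simplified form; a plain substring check stands in for minikube's full image-list comparison.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        const want = "registry.k8s.io/kube-apiserver:v1.28.4" // image the log checks for
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        if strings.Contains(string(out), want) {
            fmt.Println("all images are preloaded, skipping loading")
            return
        }
        fmt.Println("images not preloaded; extracting /preloaded.tar.lz4")
        // mirrors the tar invocation from the log (the tarball is copied over first)
        extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
            "security.capability", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if err := extract.Run(); err != nil {
            fmt.Println("extract failed:", err)
        }
    }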
	I0130 20:38:29.921661   45037 ssh_runner.go:195] Run: crio config
	I0130 20:38:29.981565   45037 cni.go:84] Creating CNI manager for ""
	I0130 20:38:29.981590   45037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:29.981612   45037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:38:29.981637   45037 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.63 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-208583 NodeName:embed-certs-208583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:38:29.981824   45037 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-208583"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:38:29.981919   45037 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-208583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-208583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:38:29.981984   45037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:38:29.991601   45037 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:38:29.991665   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:38:30.000815   45037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0130 20:38:30.016616   45037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:38:30.032999   45037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0130 20:38:30.052735   45037 ssh_runner.go:195] Run: grep 192.168.61.63	control-plane.minikube.internal$ /etc/hosts
	I0130 20:38:30.057008   45037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:30.069968   45037 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583 for IP: 192.168.61.63
	I0130 20:38:30.070004   45037 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:30.070164   45037 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:38:30.070201   45037 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:38:30.070263   45037 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/client.key
	I0130 20:38:30.070323   45037 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.key.9879da99
	I0130 20:38:30.070370   45037 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.key
	I0130 20:38:30.070496   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:38:30.070531   45037 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:38:30.070541   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:38:30.070561   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:38:30.070586   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:38:30.070612   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:38:30.070659   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:30.071211   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:38:30.098665   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:38:30.125013   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:38:30.150013   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:38:30.177206   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:38:30.202683   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:38:30.225774   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:38:30.249090   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:38:30.274681   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:38:30.302316   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:38:30.326602   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:38:30.351136   45037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:38:30.368709   45037 ssh_runner.go:195] Run: openssl version
	I0130 20:38:30.374606   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:38:30.386421   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.391240   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.391314   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.397082   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:38:30.409040   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:38:30.420910   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.425929   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.425971   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.431609   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:38:30.443527   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:38:30.455200   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.460242   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.460307   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.466225   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:38:30.479406   45037 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:38:30.485331   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:38:30.493468   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:38:30.499465   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:38:30.505394   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:38:30.511152   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:38:30.516951   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
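Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether a certificate expires within the next 24 hours. Below is an equivalent check sketched in Go with crypto/x509, using one of the certificate paths from the log; it is illustrative, not minikube's actual implementation.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside the given window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("certificate expires within 24h, would regenerate")
        } else {
            fmt.Println("certificate is valid for at least another 24h")
        }
    }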
	I0130 20:38:30.522596   45037 kubeadm.go:404] StartCluster: {Name:embed-certs-208583 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-208583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:38:30.522698   45037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:38:30.522747   45037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:30.559669   45037 cri.go:89] found id: ""
	I0130 20:38:30.559740   45037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:38:30.571465   45037 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:38:30.571487   45037 kubeadm.go:636] restartCluster start
	I0130 20:38:30.571539   45037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:38:30.581398   45037 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:30.582366   45037 kubeconfig.go:92] found "embed-certs-208583" server: "https://192.168.61.63:8443"
	I0130 20:38:30.584719   45037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:38:30.593986   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:30.594031   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:30.606926   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.094476   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:31.094545   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:31.106991   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.594553   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:31.594633   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:31.607554   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:32.094029   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:32.094114   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:32.107447   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:32.594998   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:32.595079   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:32.607929   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:33.094468   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:33.094562   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:33.111525   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:33.594502   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:33.594578   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:33.611216   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.329366   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:31.329720   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:31.329739   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:31.329672   46249 retry.go:31] will retry after 1.8606556s: waiting for machine to come up
	I0130 20:38:33.192538   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:33.192916   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:33.192938   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:33.192873   46249 retry.go:31] will retry after 2.294307307s: waiting for machine to come up
	I0130 20:38:34.094151   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:34.094223   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:34.106531   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:34.594098   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:34.594172   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:34.606286   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.094891   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:35.094995   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:35.106949   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.594452   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:35.594532   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:35.611066   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:36.094606   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:36.094684   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:36.110348   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:36.595021   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:36.595084   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:36.609884   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:37.094347   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:37.094445   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:37.106709   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:37.594248   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:37.594348   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:37.610367   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:38.095063   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:38.095141   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:38.107195   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:38.594024   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:38.594139   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:38.606041   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.489701   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:35.490129   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:35.490166   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:35.490071   46249 retry.go:31] will retry after 2.434575636s: waiting for machine to come up
	I0130 20:38:37.927709   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:37.928168   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:37.928198   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:37.928111   46249 retry.go:31] will retry after 3.073200884s: waiting for machine to come up
	I0130 20:38:39.094490   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:39.094572   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:39.106154   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:39.594866   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:39.594961   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:39.606937   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.094464   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:40.094549   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:40.106068   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.594556   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:40.594637   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:40.606499   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.606523   45037 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:38:40.606544   45037 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:38:40.606554   45037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:38:40.606605   45037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:40.646444   45037 cri.go:89] found id: ""
	I0130 20:38:40.646505   45037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:38:40.661886   45037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:38:40.670948   45037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:38:40.671008   45037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:38:40.679749   45037 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:38:40.679771   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:40.780597   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:41.804175   45037 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.023537725s)
	I0130 20:38:41.804214   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:41.999624   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:42.103064   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:42.173522   45037 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:38:42.173628   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:42.674417   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:43.173996   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:43.674137   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
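After the kubeadm init phases, the log polls roughly every 500ms for the kube-apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`; the earlier "context deadline exceeded" result came from the same pattern hitting its deadline before the restart. A small Go sketch of that wait loop follows; the one-minute deadline is an assumption, as api_server.go's actual timeout is not shown here.

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep until the kube-apiserver process exists or ctx expires.
    func waitForAPIServer(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // pgrep exits 0 once the process is running
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute) // assumed deadline
        defer cancel()
        if err := waitForAPIServer(ctx); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver process is up")
    }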
	I0130 20:38:41.004686   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:41.005140   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:41.005165   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:41.005085   46249 retry.go:31] will retry after 3.766414086s: waiting for machine to come up
	I0130 20:38:44.773568   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.774049   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Found IP for machine: 192.168.72.52
	I0130 20:38:44.774082   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has current primary IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.774099   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Reserving static IP address...
	I0130 20:38:44.774494   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-877742", mac: "52:54:00:c4:e0:0b", ip: "192.168.72.52"} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.774517   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Reserved static IP address: 192.168.72.52
	I0130 20:38:44.774543   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | skip adding static IP to network mk-default-k8s-diff-port-877742 - found existing host DHCP lease matching {name: "default-k8s-diff-port-877742", mac: "52:54:00:c4:e0:0b", ip: "192.168.72.52"}
	I0130 20:38:44.774561   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for SSH to be available...
	I0130 20:38:44.774589   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Getting to WaitForSSH function...
	I0130 20:38:44.776761   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.777079   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.777114   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.777210   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Using SSH client type: external
	I0130 20:38:44.777242   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa (-rw-------)
	I0130 20:38:44.777299   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:38:44.777332   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | About to run SSH command:
	I0130 20:38:44.777352   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | exit 0
	I0130 20:38:44.875219   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | SSH cmd err, output: <nil>: 
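The DBG lines above show how SSH availability is probed with the external /usr/bin/ssh client: the hardening options, the machine's private key, and a trivial `exit 0` remote command. The Go sketch below simply reproduces that logged invocation with os/exec; the argument vector is copied verbatim from the log rather than derived from any minikube API.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // argument vector copied from the DBG lines above; the trailing "exit 0" is the probe command
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "docker@192.168.72.52",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa",
            "-p", "22",
            "exit 0",
        }
        if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }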
	I0130 20:38:44.875515   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetConfigRaw
	I0130 20:38:44.876243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:44.878633   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.879035   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.879069   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.879336   45441 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/config.json ...
	I0130 20:38:44.879504   45441 machine.go:88] provisioning docker machine ...
	I0130 20:38:44.879522   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:44.879734   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:44.879889   45441 buildroot.go:166] provisioning hostname "default-k8s-diff-port-877742"
	I0130 20:38:44.879932   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:44.880102   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:44.882426   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.882753   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.882777   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.882927   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:44.883099   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:44.883246   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:44.883409   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:44.883569   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:44.884066   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:44.884092   45441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-877742 && echo "default-k8s-diff-port-877742" | sudo tee /etc/hostname
	I0130 20:38:45.030801   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-877742
	
	I0130 20:38:45.030847   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.033532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.033897   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.033955   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.034094   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.034309   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.034489   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.034644   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.034826   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:45.035168   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:45.035187   45441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-877742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-877742/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-877742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:38:45.175807   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:45.175849   45441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:38:45.175884   45441 buildroot.go:174] setting up certificates
	I0130 20:38:45.175907   45441 provision.go:83] configureAuth start
	I0130 20:38:45.175923   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:45.176200   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:45.179102   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.179489   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.179526   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.179664   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.182178   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.182532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.182560   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.182666   45441 provision.go:138] copyHostCerts
	I0130 20:38:45.182716   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:38:45.182728   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:38:45.182788   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:38:45.182895   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:38:45.182910   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:38:45.182973   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:38:45.183054   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:38:45.183065   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:38:45.183090   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:38:45.183158   45441 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-877742 san=[192.168.72.52 192.168.72.52 localhost 127.0.0.1 minikube default-k8s-diff-port-877742]
	I0130 20:38:45.352895   45441 provision.go:172] copyRemoteCerts
	I0130 20:38:45.352960   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:38:45.352986   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.355820   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.356141   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.356169   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.356343   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.356540   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.356717   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.356868   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.136084   45819 start.go:369] acquired machines lock for "old-k8s-version-150971" in 2m36.388823473s
	I0130 20:38:46.136157   45819 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:46.136169   45819 fix.go:54] fixHost starting: 
	I0130 20:38:46.136624   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:46.136669   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:46.153210   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33685
	I0130 20:38:46.153604   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:46.154080   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:38:46.154104   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:46.154422   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:46.154630   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:38:46.154771   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:38:46.156388   45819 fix.go:102] recreateIfNeeded on old-k8s-version-150971: state=Stopped err=<nil>
	I0130 20:38:46.156420   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	W0130 20:38:46.156613   45819 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:46.158388   45819 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-150971" ...
	I0130 20:38:45.456511   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:38:45.483324   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0130 20:38:45.510567   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:38:45.535387   45441 provision.go:86] duration metric: configureAuth took 359.467243ms
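	Not part of the test flow, but the regenerated server certificate copied to /etc/docker/server.pem above can be checked against the SANs listed in the provision step (192.168.72.52, localhost, 127.0.0.1, minikube, default-k8s-diff-port-877742) with a standard openssl query on the node:
	
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	    # the printed SAN list should match the san=[...] entries logged by provision.go:112 above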
	I0130 20:38:45.535421   45441 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:38:45.535659   45441 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:45.535749   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.538712   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.539176   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.539214   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.539334   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.539574   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.539741   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.539995   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.540244   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:45.540770   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:45.540796   45441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:38:45.877778   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:38:45.877813   45441 machine.go:91] provisioned docker machine in 998.294632ms
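	Go's fmt prints %!s(MISSING) when a format verb has no matching argument, so the verb in the command string above is a literal %s. The step that actually ran on the node is, in effect:
	
	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	which writes the insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O.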
	I0130 20:38:45.877825   45441 start.go:300] post-start starting for "default-k8s-diff-port-877742" (driver="kvm2")
	I0130 20:38:45.877845   45441 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:38:45.877869   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:45.878190   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:38:45.878224   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.881167   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.881533   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.881566   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.881704   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.881880   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.882064   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.882207   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:45.972932   45441 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:38:45.977412   45441 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:38:45.977437   45441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:38:45.977514   45441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:38:45.977593   45441 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:38:45.977694   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:38:45.985843   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:46.008484   45441 start.go:303] post-start completed in 130.643321ms
	I0130 20:38:46.008509   45441 fix.go:56] fixHost completed within 22.404447995s
	I0130 20:38:46.008533   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.011463   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.011901   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.011944   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.012088   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.012304   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.012500   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.012647   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.012803   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:46.013202   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:46.013226   45441 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:38:46.135930   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647126.077813825
	
	I0130 20:38:46.135955   45441 fix.go:206] guest clock: 1706647126.077813825
	I0130 20:38:46.135965   45441 fix.go:219] Guest: 2024-01-30 20:38:46.077813825 +0000 UTC Remote: 2024-01-30 20:38:46.008513384 +0000 UTC m=+250.621109629 (delta=69.300441ms)
	I0130 20:38:46.135988   45441 fix.go:190] guest clock delta is within tolerance: 69.300441ms
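	The mangled date command above is the same logging artifact; what ran was:
	
	    date +%s.%N    # guest epoch time with nanoseconds, e.g. 1706647126.077813825
	
	and its output is compared against the host clock to compute the 69.300441ms guest/host delta checked against tolerance here.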
	I0130 20:38:46.135993   45441 start.go:83] releasing machines lock for "default-k8s-diff-port-877742", held for 22.53196506s
	I0130 20:38:46.136021   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.136315   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:46.139211   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.139549   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.139581   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.139695   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140427   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140507   45441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:38:46.140555   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.140639   45441 ssh_runner.go:195] Run: cat /version.json
	I0130 20:38:46.140661   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.143348   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143614   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143651   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.143675   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143843   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.144027   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.144081   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.144110   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.144228   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.144253   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.144434   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.144434   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.144580   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.144707   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.241499   45441 ssh_runner.go:195] Run: systemctl --version
	I0130 20:38:46.264180   45441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:38:46.417654   45441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:38:46.423377   45441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:38:46.423450   45441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:38:46.439524   45441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
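	With the %p format verb restored (and quoting added for an interactive shell; minikube passes the argument vector directly over SSH), the CNI-disabling command above is:
	
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;
	
	i.e. every bridge/podman CNI config in /etc/cni/net.d is renamed with a .mk_disabled suffix, which is why 87-podman-bridge.conflist is reported as disabled.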
	I0130 20:38:46.439549   45441 start.go:475] detecting cgroup driver to use...
	I0130 20:38:46.439612   45441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:38:46.456668   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:38:46.469494   45441 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:38:46.469547   45441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:38:46.482422   45441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:38:46.496031   45441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:38:46.601598   45441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:38:46.710564   45441 docker.go:233] disabling docker service ...
	I0130 20:38:46.710633   45441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:38:46.724084   45441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:38:46.736019   45441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:38:46.853310   45441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:38:46.976197   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:38:46.991033   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:38:47.009961   45441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:38:47.010028   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.019749   45441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:38:47.019822   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.032215   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.043642   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.056005   45441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:38:47.068954   45441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:38:47.079752   45441 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:38:47.079823   45441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:38:47.096106   45441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:38:47.109074   45441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:38:47.243783   45441 ssh_runner.go:195] Run: sudo systemctl restart crio
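	Taken together, the sed edits above leave the CRI-O drop-in with the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, and conmon running in the pod cgroup; a quick way to confirm on the node (expected lines shown as comments, assuming the stock 02-crio.conf layout):
	
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"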
	I0130 20:38:47.468971   45441 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:38:47.469055   45441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:38:47.474571   45441 start.go:543] Will wait 60s for crictl version
	I0130 20:38:47.474646   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:38:47.479007   45441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:38:47.525155   45441 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:38:47.525259   45441 ssh_runner.go:195] Run: crio --version
	I0130 20:38:47.582308   45441 ssh_runner.go:195] Run: crio --version
	I0130 20:38:47.648689   45441 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 20:38:44.173930   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:44.197493   45037 api_server.go:72] duration metric: took 2.023971316s to wait for apiserver process to appear ...
	I0130 20:38:44.197522   45037 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:38:44.197545   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:44.198089   45037 api_server.go:269] stopped: https://192.168.61.63:8443/healthz: Get "https://192.168.61.63:8443/healthz": dial tcp 192.168.61.63:8443: connect: connection refused
	I0130 20:38:44.697622   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:48.683401   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.683435   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:48.683452   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
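	This loop (process 45037, the embed-certs-208583 control plane at 192.168.61.63:8443) is polling the apiserver's /healthz endpoint; roughly the same probe from the node would be:
	
	    curl -sk https://192.168.61.63:8443/healthz
	    # 403 at first (anonymous access to /healthz not yet allowed),
	    # then 500 while poststarthooks such as rbac/bootstrap-roles are still failing,
	    # and finally 200 "ok" once startup completes (seen at 20:38:50 below)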
	I0130 20:38:46.159722   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Start
	I0130 20:38:46.159892   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring networks are active...
	I0130 20:38:46.160650   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring network default is active
	I0130 20:38:46.160960   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring network mk-old-k8s-version-150971 is active
	I0130 20:38:46.161374   45819 main.go:141] libmachine: (old-k8s-version-150971) Getting domain xml...
	I0130 20:38:46.162142   45819 main.go:141] libmachine: (old-k8s-version-150971) Creating domain...
	I0130 20:38:47.490526   45819 main.go:141] libmachine: (old-k8s-version-150971) Waiting to get IP...
	I0130 20:38:47.491491   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:47.491971   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:47.492059   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:47.491949   46425 retry.go:31] will retry after 201.906522ms: waiting for machine to come up
	I0130 20:38:47.695709   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:47.696195   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:47.696226   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:47.696146   46425 retry.go:31] will retry after 347.547284ms: waiting for machine to come up
	I0130 20:38:48.045541   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.046078   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.046102   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.046013   46425 retry.go:31] will retry after 373.23424ms: waiting for machine to come up
	I0130 20:38:48.420618   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.421238   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.421263   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.421188   46425 retry.go:31] will retry after 515.166265ms: waiting for machine to come up
	I0130 20:38:48.937713   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.942554   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.942581   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.942448   46425 retry.go:31] will retry after 626.563548ms: waiting for machine to come up
	I0130 20:38:49.570078   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:49.570658   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:49.570689   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:49.570550   46425 retry.go:31] will retry after 618.022034ms: waiting for machine to come up
	I0130 20:38:48.786797   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.786825   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:48.786848   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:48.837579   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.837608   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:49.198568   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:49.206091   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:38:49.206135   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:38:49.697669   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:49.707878   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:38:49.707912   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:38:50.198039   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:50.209003   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 200:
	ok
	I0130 20:38:50.228887   45037 api_server.go:141] control plane version: v1.28.4
	I0130 20:38:50.228967   45037 api_server.go:131] duration metric: took 6.031436808s to wait for apiserver health ...
	I0130 20:38:50.228981   45037 cni.go:84] Creating CNI manager for ""
	I0130 20:38:50.228991   45037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:50.230543   45037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:38:47.649943   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:47.653185   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:47.653623   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:47.653664   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:47.653933   45441 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0130 20:38:47.659385   45441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:47.675851   45441 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:38:47.675918   45441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:47.724799   45441 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:38:47.724883   45441 ssh_runner.go:195] Run: which lz4
	I0130 20:38:47.729563   45441 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:38:47.735015   45441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:38:47.735048   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:38:49.612191   45441 crio.go:444] Took 1.882668 seconds to copy over tarball
	I0130 20:38:49.612263   45441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
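	The stat existence check above, with its format verbs restored, is stat -c "%s %y" /preloaded.tar.lz4 (size and mtime); it exits 1 because the tarball is not on the node yet, so the ~458 MB preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 is copied to /preloaded.tar.lz4 and unpacked:
	
	    stat -c "%s %y" /preloaded.tar.lz4    # fails: file not present yet, triggering the scp
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4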
	I0130 20:38:50.231895   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:38:50.262363   45037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
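	The recommended bridge CNI config is generated in memory and written straight to /etc/cni/net.d/1-k8s.conflist (457 bytes), so its contents never appear in this log; it can be inspected on the node with:
	
	    sudo cat /etc/cni/net.d/1-k8s.conflist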
	I0130 20:38:50.290525   45037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:38:50.307654   45037 system_pods.go:59] 8 kube-system pods found
	I0130 20:38:50.307696   45037 system_pods.go:61] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:38:50.307708   45037 system_pods.go:61] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:38:50.307721   45037 system_pods.go:61] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:38:50.307736   45037 system_pods.go:61] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:38:50.307751   45037 system_pods.go:61] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:38:50.307760   45037 system_pods.go:61] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:38:50.307769   45037 system_pods.go:61] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:38:50.307788   45037 system_pods.go:61] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:38:50.307810   45037 system_pods.go:74] duration metric: took 17.261001ms to wait for pod list to return data ...
	I0130 20:38:50.307820   45037 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:38:50.317889   45037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:38:50.317926   45037 node_conditions.go:123] node cpu capacity is 2
	I0130 20:38:50.317939   45037 node_conditions.go:105] duration metric: took 10.11037ms to run NodePressure ...
	I0130 20:38:50.317960   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:50.681835   45037 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:38:50.688460   45037 kubeadm.go:787] kubelet initialised
	I0130 20:38:50.688488   45037 kubeadm.go:788] duration metric: took 6.61921ms waiting for restarted kubelet to initialise ...
	I0130 20:38:50.688498   45037 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:50.696051   45037 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.703680   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.703713   45037 pod_ready.go:81] duration metric: took 7.634057ms waiting for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.703724   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.703739   45037 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.710192   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "etcd-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.710216   45037 pod_ready.go:81] duration metric: took 6.467699ms waiting for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.710227   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "etcd-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.710235   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.720866   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.720894   45037 pod_ready.go:81] duration metric: took 10.648867ms waiting for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.720906   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.720914   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.731095   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.731162   45037 pod_ready.go:81] duration metric: took 10.237453ms waiting for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.731181   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.731190   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.097357   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-proxy-g7q5t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.097391   45037 pod_ready.go:81] duration metric: took 366.190232ms waiting for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.097404   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-proxy-g7q5t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.097413   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.499223   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.499261   45037 pod_ready.go:81] duration metric: took 401.839475ms waiting for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.499293   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.499303   45037 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.895725   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.895779   45037 pod_ready.go:81] duration metric: took 396.460908ms waiting for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.895798   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.895811   45037 pod_ready.go:38] duration metric: took 1.207302604s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:51.895836   45037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:38:51.909431   45037 ops.go:34] apiserver oom_adj: -16
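The oom_adj probe above reads /proc/<apiserver pid>/oom_adj over SSH; a value of -16 tells the kernel's OOM killer to strongly prefer other processes. A minimal local sketch of the same check in Go, assuming it runs directly on the node rather than through minikube's ssh_runner:

    // oomcheck.go - illustrative only; not the ssh_runner-based code minikube runs.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Walk /proc looking for a process whose comm is kube-apiserver.
        procs, err := filepath.Glob("/proc/[0-9]*/comm")
        if err != nil {
            panic(err)
        }
        for _, comm := range procs {
            name, err := os.ReadFile(comm)
            if err != nil {
                continue // process may have exited between glob and read
            }
            if strings.TrimSpace(string(name)) != "kube-apiserver" {
                continue
            }
            pidDir := filepath.Dir(comm)
            adj, err := os.ReadFile(filepath.Join(pidDir, "oom_adj"))
            if err != nil {
                continue
            }
            fmt.Printf("apiserver %s oom_adj: %s", filepath.Base(pidDir), adj)
        }
    }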
	I0130 20:38:51.909454   45037 kubeadm.go:640] restartCluster took 21.337960534s
	I0130 20:38:51.909472   45037 kubeadm.go:406] StartCluster complete in 21.386877314s
	I0130 20:38:51.909491   45037 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:51.909571   45037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:38:51.911558   45037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:51.911793   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:38:51.911888   45037 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:38:51.911974   45037 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-208583"
	I0130 20:38:51.911995   45037 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-208583"
	W0130 20:38:51.912007   45037 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:38:51.912044   45037 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:51.912101   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.912138   45037 addons.go:69] Setting default-storageclass=true in profile "embed-certs-208583"
	I0130 20:38:51.912168   45037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-208583"
	I0130 20:38:51.912131   45037 addons.go:69] Setting metrics-server=true in profile "embed-certs-208583"
	I0130 20:38:51.912238   45037 addons.go:234] Setting addon metrics-server=true in "embed-certs-208583"
	W0130 20:38:51.912250   45037 addons.go:243] addon metrics-server should already be in state true
	I0130 20:38:51.912328   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.912537   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912561   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.912583   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912603   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.912686   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912711   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.923647   45037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-208583" context rescaled to 1 replicas
	I0130 20:38:51.923691   45037 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:38:51.926120   45037 out.go:177] * Verifying Kubernetes components...
	I0130 20:38:51.929413   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:38:51.930498   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0130 20:38:51.930911   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0130 20:38:51.931075   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.931580   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.931988   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.932001   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.932296   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.932730   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.932756   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.933221   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.933273   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.933917   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.934492   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.934524   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.936079   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42667
	I0130 20:38:51.936488   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.937121   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.937144   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.937525   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.937703   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.941576   45037 addons.go:234] Setting addon default-storageclass=true in "embed-certs-208583"
	W0130 20:38:51.941597   45037 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:38:51.941623   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.942033   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.942072   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.953268   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44577
	I0130 20:38:51.953715   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0130 20:38:51.953863   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.954633   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.954659   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.954742   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.955212   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.955233   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.955318   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.955530   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.955663   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.955853   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.957839   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.958080   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.960896   45037 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:38:51.961493   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
	I0130 20:38:51.962677   45037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:38:51.962838   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:38:51.964463   45037 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:38:51.964487   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:38:51.964518   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.964486   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:38:51.964554   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.963107   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.965261   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.965274   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.965656   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.966482   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.966520   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.968651   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969034   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.969062   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969307   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.969493   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.969580   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969656   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.969809   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:51.970328   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.970372   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.970391   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.970521   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.970706   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.970866   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:51.985009   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0130 20:38:51.985512   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.986096   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.986119   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.986558   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.986778   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.988698   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.991566   45037 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:38:51.991620   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:38:51.991647   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.994416   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.995367   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.995370   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.995439   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.995585   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.995740   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.995885   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:52.125074   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:38:52.140756   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:38:52.140787   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:38:52.180728   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:38:52.195559   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:38:52.195587   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:38:52.235770   45037 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 20:38:52.235779   45037 node_ready.go:35] waiting up to 6m0s for node "embed-certs-208583" to be "Ready" ...
	I0130 20:38:52.243414   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:38:52.243444   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:38:52.349604   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
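The addon steps above follow one pattern: each manifest is scp'd into /etc/kubernetes/addons/ inside the VM, then applied with the bundled kubectl against the in-VM kubeconfig. A rough sketch of that apply step, assuming the in-VM paths from the log and ignoring the sudo and SSH plumbing minikube does through ssh_runner:

    // applyaddon.go - approximates the metrics-server apply command above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        // Use the in-VM kubectl binary and kubeconfig, as in the log.
        cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Fprintln(os.Stderr, "apply failed:", err)
        }
    }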
	I0130 20:38:54.111857   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.931041237s)
	I0130 20:38:54.111916   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.111938   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112013   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.986903299s)
	I0130 20:38:54.112051   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112065   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112337   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112383   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112398   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112403   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112411   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112421   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112426   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112434   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112423   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112450   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112653   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112728   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112748   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112770   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112797   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112806   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.119872   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.119893   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.120118   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.120138   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121373   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.771724991s)
	I0130 20:38:54.121408   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.121421   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.121619   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.121636   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121647   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.121655   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.121837   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.121853   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121875   45037 addons.go:470] Verifying addon metrics-server=true in "embed-certs-208583"
	I0130 20:38:54.332655   45037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:38:50.189837   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:50.190326   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:50.190352   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:50.190273   46425 retry.go:31] will retry after 843.505616ms: waiting for machine to come up
	I0130 20:38:51.035080   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:51.035482   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:51.035511   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:51.035454   46425 retry.go:31] will retry after 1.230675294s: waiting for machine to come up
	I0130 20:38:52.267754   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:52.268342   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:52.268365   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:52.268298   46425 retry.go:31] will retry after 1.516187998s: waiting for machine to come up
	I0130 20:38:53.785734   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:53.786142   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:53.786173   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:53.786084   46425 retry.go:31] will retry after 2.020274977s: waiting for machine to come up
	I0130 20:38:53.002777   45441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.390479779s)
	I0130 20:38:53.002812   45441 crio.go:451] Took 3.390595 seconds to extract the tarball
	I0130 20:38:53.002824   45441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:38:53.059131   45441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:53.121737   45441 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:38:53.121765   45441 cache_images.go:84] Images are preloaded, skipping loading
	I0130 20:38:53.121837   45441 ssh_runner.go:195] Run: crio config
	I0130 20:38:53.187904   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:38:53.187931   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:53.187953   45441 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:38:53.187982   45441 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-877742 NodeName:default-k8s-diff-port-877742 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:38:53.188157   45441 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-877742"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:38:53.188253   45441 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-877742 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-877742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0130 20:38:53.188320   45441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:38:53.200851   45441 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:38:53.200938   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:38:53.212897   45441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0130 20:38:53.231805   45441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:38:53.253428   45441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0130 20:38:53.274041   45441 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0130 20:38:53.278499   45441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
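The bash one-liner above makes the control-plane.minikube.internal record in /etc/hosts idempotent: strip any existing tab-delimited entry, append the fresh one, stage the result in a temp file, then copy it into place with sudo. The same rewrite sketched in Go, with the staging path /tmp/hosts.new chosen here only for illustration:

    // hostsrecord.go - sketch of the grep -v / echo / sudo cp pattern above.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const record = "192.168.72.52\t" + host

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        kept := make([]string, 0, len(lines)+1)
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale record before re-adding it
            }
            kept = append(kept, line)
        }
        kept = append(kept, record)
        // Stage the result; the log then copies the staged file over /etc/hosts with sudo.
        out := strings.Join(kept, "\n") + "\n"
        if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
            panic(err)
        }
    }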
	I0130 20:38:53.295089   45441 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742 for IP: 192.168.72.52
	I0130 20:38:53.295126   45441 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:53.295326   45441 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:38:53.295393   45441 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:38:53.295497   45441 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.key
	I0130 20:38:53.295581   45441 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.key.02e1fdc8
	I0130 20:38:53.295637   45441 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.key
	I0130 20:38:53.295774   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:38:53.295813   45441 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:38:53.295827   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:38:53.295864   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:38:53.295912   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:38:53.295948   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:38:53.296012   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:53.296828   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:38:53.326150   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:38:53.356286   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:38:53.384496   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:38:53.414403   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:38:53.440628   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:38:53.465452   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:38:53.494321   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:38:53.520528   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:38:53.543933   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:38:53.569293   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:38:53.594995   45441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:38:53.615006   45441 ssh_runner.go:195] Run: openssl version
	I0130 20:38:53.622442   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:38:53.636482   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.642501   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.642572   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.649251   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:38:53.661157   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:38:53.673453   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.678369   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.678439   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.684812   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:38:53.696906   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:38:53.710065   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.714715   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.714776   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.720458   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:38:53.733050   45441 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:38:53.737894   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:38:53.744337   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:38:53.750326   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:38:53.756139   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:38:53.761883   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:38:53.767633   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
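Each openssl invocation above uses -checkend 86400, which asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a failing check would force regeneration before restarting the control plane. The equivalent test in Go, using a few of the cert paths from the log (they exist only inside the VM):

    // certcheck.go - the -checkend 86400 question expressed against NotAfter.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            fmt.Println(p, "expires within 24h:", soon, "err:", err)
        }
    }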
	I0130 20:38:53.773367   45441 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-877742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-877742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.52 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:38:53.773480   45441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:38:53.773551   45441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:53.815095   45441 cri.go:89] found id: ""
	I0130 20:38:53.815159   45441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:38:53.826497   45441 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:38:53.826521   45441 kubeadm.go:636] restartCluster start
	I0130 20:38:53.826570   45441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:38:53.837155   45441 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:53.838622   45441 kubeconfig.go:92] found "default-k8s-diff-port-877742" server: "https://192.168.72.52:8444"
	I0130 20:38:53.841776   45441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:38:53.852124   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:53.852191   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:53.864432   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.353064   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:54.353141   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:54.365422   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.853083   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:54.853170   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:54.869932   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:55.352281   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:55.352360   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:55.369187   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.550999   45037 addons.go:505] enable addons completed in 2.639107358s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:38:54.692017   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:56.740251   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:55.809310   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:55.809708   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:55.809741   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:55.809655   46425 retry.go:31] will retry after 1.997080797s: waiting for machine to come up
	I0130 20:38:57.808397   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:57.808798   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:57.808829   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:57.808744   46425 retry.go:31] will retry after 3.605884761s: waiting for machine to come up
	I0130 20:38:55.852241   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:55.852356   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:55.864923   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:56.352455   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:56.352559   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:56.368458   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:56.853090   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:56.853175   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:56.869148   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:57.352965   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:57.353055   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:57.370697   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:57.852261   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:57.852391   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:57.868729   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:58.352147   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:58.352250   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:58.368543   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:58.852300   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:58.852373   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:58.868594   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.353039   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:59.353110   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:59.365593   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.852147   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:59.852276   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:59.865561   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:00.353077   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:00.353186   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:00.370006   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.242842   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:59.739830   45037 node_ready.go:49] node "embed-certs-208583" has status "Ready":"True"
	I0130 20:38:59.739851   45037 node_ready.go:38] duration metric: took 7.503983369s waiting for node "embed-certs-208583" to be "Ready" ...
	I0130 20:38:59.739859   45037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:59.746243   45037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.751722   45037 pod_ready.go:92] pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.751745   45037 pod_ready.go:81] duration metric: took 5.480115ms waiting for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.751752   45037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.757152   45037 pod_ready.go:92] pod "etcd-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.757175   45037 pod_ready.go:81] duration metric: took 5.417291ms waiting for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.757184   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.762156   45037 pod_ready.go:92] pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.762231   45037 pod_ready.go:81] duration metric: took 4.985076ms waiting for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.762267   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:01.773853   45037 pod_ready.go:102] pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace has status "Ready":"False"
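The pod_ready lines for this profile distill to one check: a pod counts as "Ready" when its Ready condition reports status True, and the wait keeps polling (pod_ready.go:102) until it does or the budget runs out. A small self-contained sketch of that condition check, using a plain struct instead of the real corev1.PodCondition type:

    // podready.go - the Ready-condition test behind the pod_ready log lines.
    package main

    import "fmt"

    // condition mirrors only the corev1.PodCondition fields the check needs.
    type condition struct {
        Type   string
        Status string
    }

    // podReady reports whether the pod's Ready condition is True.
    func podReady(conds []condition) bool {
        for _, c := range conds {
            if c.Type == "Ready" {
                return c.Status == "True"
            }
        }
        return false
    }

    func main() {
        conds := []condition{{Type: "Initialized", Status: "True"}, {Type: "Ready", Status: "False"}}
        if podReady(conds) {
            fmt.Println("pod is Ready, stop waiting")
        } else {
            fmt.Println("pod not Ready yet, keep polling")
        }
    }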
	I0130 20:39:01.415831   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:01.416304   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:39:01.416345   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:39:01.416273   46425 retry.go:31] will retry after 3.558433109s: waiting for machine to come up
	I0130 20:39:00.852444   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:00.852545   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:00.865338   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:01.353002   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:01.353101   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:01.366419   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:01.853034   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:01.853114   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:01.866142   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:02.352652   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:02.352752   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:02.364832   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:02.852325   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:02.852406   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:02.864013   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.352408   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:03.352518   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:03.363939   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.853126   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:03.853200   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:03.865047   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.865084   45441 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
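The repeated "Checking apiserver status" blocks above are a poll loop: roughly every 500ms minikube runs pgrep for the apiserver process until it appears or the surrounding context deadline expires, at which point it decides the cluster needs a reconfigure. A sketch of that poll-until-deadline pattern, with a 10s deadline picked arbitrarily and without the sudo/SSH wrapping used in the log:

    // apiserverpoll.go - poll for the apiserver pid until a context deadline.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                fmt.Printf("apiserver pid: %s", out)
                return nil
            }
            select {
            case <-ctx.Done():
                // Mirrors the "needs reconfigure: apiserver error" decision above.
                return fmt.Errorf("needs reconfigure: apiserver error: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if err := waitForAPIServer(ctx); err != nil {
            fmt.Println(err)
        }
    }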
	I0130 20:39:03.865094   45441 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:39:03.865105   45441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:39:03.865154   45441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:03.904863   45441 cri.go:89] found id: ""
	I0130 20:39:03.904932   45441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:39:03.922225   45441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:39:03.931861   45441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:39:03.931915   45441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:03.941185   45441 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:03.941205   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.064230   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.627940   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.816900   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.893059   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
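Rather than a full kubeadm init, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the existing /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence, assuming kubeadm resolves on PATH inside the VM and the caller already has root:

    // kubeadmphases.go - replay the init phases used by the restart path.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, phase := range phases {
            args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            // Put the versioned binary directory first, as the log's env PATH does.
            cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.28.4:"+os.Getenv("PATH"))
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n%s", phase, err, out)
                return
            }
        }
        fmt.Println("control plane phases replayed")
    }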
	I0130 20:39:04.986288   45441 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:39:04.986362   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.448368   44923 start.go:369] acquired machines lock for "no-preload-473743" in 58.134425603s
	I0130 20:39:06.448435   44923 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:39:06.448443   44923 fix.go:54] fixHost starting: 
	I0130 20:39:06.448866   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:39:06.448900   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:39:06.468570   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I0130 20:39:06.468965   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:39:06.469552   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:39:06.469587   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:39:06.469950   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:39:06.470153   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:06.470312   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:39:06.472312   44923 fix.go:102] recreateIfNeeded on no-preload-473743: state=Stopped err=<nil>
	I0130 20:39:06.472337   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	W0130 20:39:06.472495   44923 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:39:06.474460   44923 out.go:177] * Restarting existing kvm2 VM for "no-preload-473743" ...
	I0130 20:39:04.976314   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.976787   45819 main.go:141] libmachine: (old-k8s-version-150971) Found IP for machine: 192.168.39.16
	I0130 20:39:04.976820   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.976830   45819 main.go:141] libmachine: (old-k8s-version-150971) Reserving static IP address...
	I0130 20:39:04.977271   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "old-k8s-version-150971", mac: "52:54:00:6e:fe:f8", ip: "192.168.39.16"} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:04.977300   45819 main.go:141] libmachine: (old-k8s-version-150971) Reserved static IP address: 192.168.39.16
	I0130 20:39:04.977325   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | skip adding static IP to network mk-old-k8s-version-150971 - found existing host DHCP lease matching {name: "old-k8s-version-150971", mac: "52:54:00:6e:fe:f8", ip: "192.168.39.16"}
	I0130 20:39:04.977345   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Getting to WaitForSSH function...
	I0130 20:39:04.977361   45819 main.go:141] libmachine: (old-k8s-version-150971) Waiting for SSH to be available...
	I0130 20:39:04.979621   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.980015   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:04.980042   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.980138   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Using SSH client type: external
	I0130 20:39:04.980164   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa (-rw-------)
	I0130 20:39:04.980206   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:39:04.980221   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | About to run SSH command:
	I0130 20:39:04.980259   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | exit 0
	I0130 20:39:05.079758   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | SSH cmd err, output: <nil>: 
	I0130 20:39:05.080092   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetConfigRaw
	I0130 20:39:05.080846   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:05.083636   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.084019   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.084062   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.084354   45819 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/config.json ...
	I0130 20:39:05.084608   45819 machine.go:88] provisioning docker machine ...
	I0130 20:39:05.084635   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:05.084845   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.085031   45819 buildroot.go:166] provisioning hostname "old-k8s-version-150971"
	I0130 20:39:05.085063   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.085221   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.087561   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.087930   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.087963   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.088067   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.088220   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.088384   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.088550   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.088736   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.089124   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.089141   45819 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-150971 && echo "old-k8s-version-150971" | sudo tee /etc/hostname
	I0130 20:39:05.232496   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-150971
	
	I0130 20:39:05.232528   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.234898   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.235190   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.235227   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.235310   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.235515   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.235655   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.235791   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.235921   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.236233   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.236251   45819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-150971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-150971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-150971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:39:05.370716   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:39:05.370753   45819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:39:05.370774   45819 buildroot.go:174] setting up certificates
	I0130 20:39:05.370787   45819 provision.go:83] configureAuth start
	I0130 20:39:05.370800   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.371158   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:05.373602   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.373946   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.373970   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.374153   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.376230   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.376617   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.376657   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.376763   45819 provision.go:138] copyHostCerts
	I0130 20:39:05.376816   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:39:05.376826   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:39:05.376892   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:39:05.377066   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:39:05.377079   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:39:05.377113   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:39:05.377205   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:39:05.377216   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:39:05.377243   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:39:05.377336   45819 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-150971 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube old-k8s-version-150971]
	I0130 20:39:05.649128   45819 provision.go:172] copyRemoteCerts
	I0130 20:39:05.649183   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:39:05.649206   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.652019   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.652353   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.652385   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.652657   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.652857   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.653048   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.653207   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:05.753981   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0130 20:39:05.782847   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 20:39:05.810083   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:39:05.836967   45819 provision.go:86] duration metric: configureAuth took 466.16712ms
	I0130 20:39:05.836989   45819 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:39:05.837156   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:39:05.837222   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.840038   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.840384   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.840422   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.840597   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.840832   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.841019   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.841167   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.841338   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.841681   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.841700   45819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:39:06.170121   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:39:06.170151   45819 machine.go:91] provisioned docker machine in 1.08552444s
	I0130 20:39:06.170163   45819 start.go:300] post-start starting for "old-k8s-version-150971" (driver="kvm2")
	I0130 20:39:06.170183   45819 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:39:06.170202   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.170544   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:39:06.170583   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.173794   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.174165   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.174192   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.174421   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.174620   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.174804   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.174947   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.273272   45819 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:39:06.277900   45819 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:39:06.277928   45819 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:39:06.278010   45819 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:39:06.278099   45819 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:39:06.278207   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:39:06.286905   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:06.311772   45819 start.go:303] post-start completed in 141.592454ms
	I0130 20:39:06.311808   45819 fix.go:56] fixHost completed within 20.175639407s
	I0130 20:39:06.311832   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.314627   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.314998   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.315027   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.315179   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.315402   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.315585   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.315758   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.315936   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:06.316254   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:06.316269   45819 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:39:06.448193   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647146.389757507
	
	I0130 20:39:06.448219   45819 fix.go:206] guest clock: 1706647146.389757507
	I0130 20:39:06.448230   45819 fix.go:219] Guest: 2024-01-30 20:39:06.389757507 +0000 UTC Remote: 2024-01-30 20:39:06.311812895 +0000 UTC m=+176.717060563 (delta=77.944612ms)
	I0130 20:39:06.448277   45819 fix.go:190] guest clock delta is within tolerance: 77.944612ms
	I0130 20:39:06.448285   45819 start.go:83] releasing machines lock for "old-k8s-version-150971", held for 20.312150878s
	I0130 20:39:06.448318   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.448584   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:06.451978   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.452448   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.452475   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.452632   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453188   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453364   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453450   45819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:39:06.453501   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.453604   45819 ssh_runner.go:195] Run: cat /version.json
	I0130 20:39:06.453622   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.456426   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.456694   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.456722   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.456743   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.457015   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.457218   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.457228   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.457266   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.457473   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.457483   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.457648   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.457657   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.457834   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.457945   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.575025   45819 ssh_runner.go:195] Run: systemctl --version
	I0130 20:39:06.580884   45819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:39:06.730119   45819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:39:06.737872   45819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:39:06.737945   45819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:39:06.752952   45819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:39:06.752987   45819 start.go:475] detecting cgroup driver to use...
	I0130 20:39:06.753062   45819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:39:06.772925   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:39:06.787880   45819 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:39:06.787957   45819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:39:06.805662   45819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:39:06.820819   45819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:39:06.941809   45819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:39:07.067216   45819 docker.go:233] disabling docker service ...
	I0130 20:39:07.067299   45819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:39:07.084390   45819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:39:07.099373   45819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:39:07.242239   45819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:39:07.378297   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:39:07.390947   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:39:07.414177   45819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0130 20:39:07.414256   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.427074   45819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:39:07.427154   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.439058   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.451547   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.462473   45819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:39:07.474082   45819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:39:07.484883   45819 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:39:07.484943   45819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:39:07.502181   45819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:39:07.511315   45819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:39:07.677114   45819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:39:07.878176   45819 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:39:07.878247   45819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:39:07.885855   45819 start.go:543] Will wait 60s for crictl version
	I0130 20:39:07.885918   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:07.895480   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:39:07.946256   45819 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:39:07.946344   45819 ssh_runner.go:195] Run: crio --version
	I0130 20:39:07.999647   45819 ssh_runner.go:195] Run: crio --version
	I0130 20:39:08.064335   45819 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0130 20:39:04.270868   45037 pod_ready.go:92] pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.270900   45037 pod_ready.go:81] duration metric: took 4.508624463s waiting for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.270911   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.276806   45037 pod_ready.go:92] pod "kube-proxy-g7q5t" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.276830   45037 pod_ready.go:81] duration metric: took 5.914142ms waiting for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.276839   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.283207   45037 pod_ready.go:92] pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.283225   45037 pod_ready.go:81] duration metric: took 6.380407ms waiting for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.283235   45037 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:06.291591   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:08.318273   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:08.065754   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:08.068986   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:08.069433   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:08.069477   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:08.069665   45819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 20:39:08.074101   45819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:08.088404   45819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 20:39:08.088468   45819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:08.133749   45819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 20:39:08.133831   45819 ssh_runner.go:195] Run: which lz4
	I0130 20:39:08.138114   45819 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:39:08.142668   45819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:39:08.142709   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0130 20:39:05.487066   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:05.987386   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.486465   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.987491   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:07.486540   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:07.518826   45441 api_server.go:72] duration metric: took 2.532536561s to wait for apiserver process to appear ...
	I0130 20:39:07.518852   45441 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:39:07.518875   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:06.475902   44923 main.go:141] libmachine: (no-preload-473743) Calling .Start
	I0130 20:39:06.476095   44923 main.go:141] libmachine: (no-preload-473743) Ensuring networks are active...
	I0130 20:39:06.476929   44923 main.go:141] libmachine: (no-preload-473743) Ensuring network default is active
	I0130 20:39:06.477344   44923 main.go:141] libmachine: (no-preload-473743) Ensuring network mk-no-preload-473743 is active
	I0130 20:39:06.477817   44923 main.go:141] libmachine: (no-preload-473743) Getting domain xml...
	I0130 20:39:06.478643   44923 main.go:141] libmachine: (no-preload-473743) Creating domain...
	I0130 20:39:07.834909   44923 main.go:141] libmachine: (no-preload-473743) Waiting to get IP...
	I0130 20:39:07.835906   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:07.836320   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:07.836371   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:07.836287   46613 retry.go:31] will retry after 205.559104ms: waiting for machine to come up
	I0130 20:39:08.043926   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.044522   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.044607   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.044570   46613 retry.go:31] will retry after 291.055623ms: waiting for machine to come up
	I0130 20:39:08.337157   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.337756   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.337859   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.337823   46613 retry.go:31] will retry after 462.903788ms: waiting for machine to come up
	I0130 20:39:08.802588   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.803397   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.803497   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.803459   46613 retry.go:31] will retry after 497.808285ms: waiting for machine to come up
	I0130 20:39:09.303349   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:09.304015   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:09.304037   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:09.303936   46613 retry.go:31] will retry after 569.824748ms: waiting for machine to come up
	I0130 20:39:09.875816   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:09.876316   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:09.876348   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:09.876259   46613 retry.go:31] will retry after 589.654517ms: waiting for machine to come up
	I0130 20:39:10.467029   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:10.467568   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:10.467601   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:10.467520   46613 retry.go:31] will retry after 857.069247ms: waiting for machine to come up
	I0130 20:39:10.796542   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:13.290072   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:09.980254   45819 crio.go:444] Took 1.842164 seconds to copy over tarball
	I0130 20:39:09.980328   45819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:39:13.116148   45819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.135783447s)
	I0130 20:39:13.116184   45819 crio.go:451] Took 3.135904 seconds to extract the tarball
	I0130 20:39:13.116196   45819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:39:13.161285   45819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:13.226970   45819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 20:39:13.227008   45819 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 20:39:13.227096   45819 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.227151   45819 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.227153   45819 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.227173   45819 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.227121   45819 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:13.227155   45819 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0130 20:39:13.227439   45819 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.227117   45819 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.229003   45819 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.229038   45819 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:13.229065   45819 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.229112   45819 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.229011   45819 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0130 20:39:13.229170   45819 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.229177   45819 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.229217   45819 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.443441   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.484878   45819 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0130 20:39:13.484941   45819 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.485021   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.489291   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.526847   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.526966   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.527312   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0130 20:39:13.528949   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.532002   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0130 20:39:13.532309   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.532701   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.662312   45819 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0130 20:39:13.662355   45819 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.662422   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.669155   45819 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0130 20:39:13.669201   45819 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.669265   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708339   45819 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0130 20:39:13.708373   45819 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0130 20:39:13.708398   45819 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0130 20:39:13.708404   45819 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.708435   45819 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0130 20:39:13.708470   45819 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.708476   45819 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0130 20:39:13.708491   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.708507   45819 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.708508   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708451   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708443   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708565   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.708549   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.767721   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.767762   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.767789   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0130 20:39:13.767835   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0130 20:39:13.767869   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0130 20:39:13.767935   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.816661   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0130 20:39:13.863740   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0130 20:39:13.863751   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0130 20:39:13.863798   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0130 20:39:14.096216   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:14.241457   45819 cache_images.go:92] LoadImages completed in 1.014424533s
	W0130 20:39:14.241562   45819 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0130 20:39:14.241641   45819 ssh_runner.go:195] Run: crio config
	I0130 20:39:14.307624   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:39:14.307644   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:14.307673   45819 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:39:14.307696   45819 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-150971 NodeName:old-k8s-version-150971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0130 20:39:14.307866   45819 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-150971"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-150971
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.16:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:39:14.307973   45819 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-150971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:39:14.308042   45819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0130 20:39:14.318757   45819 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:39:14.318830   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:39:14.329640   45819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0130 20:39:14.347498   45819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:39:14.365403   45819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0130 20:39:14.383846   45819 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I0130 20:39:14.388138   45819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:14.402420   45819 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971 for IP: 192.168.39.16
	I0130 20:39:14.402483   45819 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:39:14.402661   45819 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:39:14.402707   45819 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:39:14.402780   45819 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.key
	I0130 20:39:14.402837   45819 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.key.5918fcb3
	I0130 20:39:14.402877   45819 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.key
	I0130 20:39:14.403025   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:39:14.403076   45819 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:39:14.403094   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:39:14.403131   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:39:14.403171   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:39:14.403206   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:39:14.403290   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:14.404157   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:39:14.430902   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 20:39:14.454554   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:39:14.482335   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 20:39:14.505963   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:39:14.532616   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:39:14.558930   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:39:14.585784   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:39:14.609214   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:39:14.635743   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:39:12.268901   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:12.268934   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:12.268948   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:12.307051   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:12.307093   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:12.519619   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:12.530857   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:12.530904   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:13.019370   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:13.024544   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:13.024577   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:13.519023   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:13.525748   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:13.525784   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:14.019318   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:14.026570   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:14.026600   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:14.519000   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.074306   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.074341   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:15.074353   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.081035   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.081075   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:11.325993   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:11.326475   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:11.326506   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:11.326439   46613 retry.go:31] will retry after 994.416536ms: waiting for machine to come up
	I0130 20:39:12.323190   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:12.323897   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:12.323924   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:12.323807   46613 retry.go:31] will retry after 1.746704262s: waiting for machine to come up
	I0130 20:39:14.072583   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:14.073100   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:14.073158   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:14.073072   46613 retry.go:31] will retry after 2.322781715s: waiting for machine to come up
	I0130 20:39:15.519005   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.609496   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.609529   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:16.018990   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:16.024390   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 200:
	ok
	I0130 20:39:16.037151   45441 api_server.go:141] control plane version: v1.28.4
	I0130 20:39:16.037191   45441 api_server.go:131] duration metric: took 8.518327222s to wait for apiserver health ...
	I0130 20:39:16.037203   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:39:16.037211   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:16.039114   45441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:39:15.290788   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:17.292552   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:14.662372   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:39:14.814291   45819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:39:14.832453   45819 ssh_runner.go:195] Run: openssl version
	I0130 20:39:14.838238   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:39:14.848628   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.853713   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.853761   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.859768   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:39:14.870658   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:39:14.881444   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.886241   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.886290   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.892197   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:39:14.903459   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:39:14.914463   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.919337   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.919413   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.925258   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:39:14.935893   45819 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:39:14.941741   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:39:14.948871   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:39:14.955038   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:39:14.961605   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:39:14.967425   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:39:14.973845   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:39:14.980072   45819 kubeadm.go:404] StartCluster: {Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:39:14.980218   45819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:39:14.980265   45819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:15.021821   45819 cri.go:89] found id: ""
	I0130 20:39:15.021920   45819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:39:15.033604   45819 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:39:15.033629   45819 kubeadm.go:636] restartCluster start
	I0130 20:39:15.033686   45819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:39:15.044324   45819 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:15.045356   45819 kubeconfig.go:92] found "old-k8s-version-150971" server: "https://192.168.39.16:8443"
	I0130 20:39:15.047610   45819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:39:15.057690   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:15.057746   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:15.073207   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:15.558392   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:15.558480   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:15.574711   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.057794   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:16.057971   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:16.073882   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.557808   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:16.557879   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:16.571659   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:17.057817   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:17.057922   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:17.074250   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:17.557727   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:17.557809   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:17.573920   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:18.058504   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:18.058573   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:18.070636   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:18.558163   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:18.558262   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:18.570781   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:19.058321   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:19.058414   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:19.074887   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:19.558503   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:19.558596   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:19.570666   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.040606   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:39:16.065469   45441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:39:16.099751   45441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:39:16.113444   45441 system_pods.go:59] 8 kube-system pods found
	I0130 20:39:16.113486   45441 system_pods.go:61] "coredns-5dd5756b68-2955f" [abae9f5c-ed48-494b-b014-a28f6290d772] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:39:16.113498   45441 system_pods.go:61] "etcd-default-k8s-diff-port-877742" [0f69a8d9-5549-4f3a-8b12-ee9e96e08271] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:39:16.113509   45441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-877742" [ab6cf2c3-cc75-44b8-b039-6e21881a9ade] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:39:16.113519   45441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-877742" [4b313734-cd1e-4229-afcd-4d0b517594ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:39:16.113533   45441 system_pods.go:61] "kube-proxy-s9ssn" [ea1c69e6-d319-41ee-a47f-4937f03ecdc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:39:16.113549   45441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-877742" [3f4d9e5f-1421-4576-839b-3bdfba56700b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:39:16.113566   45441 system_pods.go:61] "metrics-server-57f55c9bc5-hzfwg" [1e06ac92-f7ff-418a-9a8d-72d763566bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:39:16.113582   45441 system_pods.go:61] "storage-provisioner" [4cf793ab-e7a5-4a51-bcb9-a07bea323a44] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:39:16.113599   45441 system_pods.go:74] duration metric: took 13.827445ms to wait for pod list to return data ...
	I0130 20:39:16.113608   45441 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:39:16.121786   45441 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:39:16.121882   45441 node_conditions.go:123] node cpu capacity is 2
	I0130 20:39:16.121904   45441 node_conditions.go:105] duration metric: took 8.289345ms to run NodePressure ...
	I0130 20:39:16.121929   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:16.440112   45441 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:39:16.447160   45441 kubeadm.go:787] kubelet initialised
	I0130 20:39:16.447188   45441 kubeadm.go:788] duration metric: took 7.04624ms waiting for restarted kubelet to initialise ...
	I0130 20:39:16.447198   45441 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:39:16.457164   45441 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-2955f" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:16.463990   45441 pod_ready.go:97] node "default-k8s-diff-port-877742" hosting pod "coredns-5dd5756b68-2955f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.464020   45441 pod_ready.go:81] duration metric: took 6.825543ms waiting for pod "coredns-5dd5756b68-2955f" in "kube-system" namespace to be "Ready" ...
	E0130 20:39:16.464033   45441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-877742" hosting pod "coredns-5dd5756b68-2955f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.464044   45441 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:16.476983   45441 pod_ready.go:97] node "default-k8s-diff-port-877742" hosting pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.477077   45441 pod_ready.go:81] duration metric: took 12.988392ms waiting for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	E0130 20:39:16.477109   45441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-877742" hosting pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.477128   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:18.486083   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:16.397588   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:16.398050   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:16.398082   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:16.397988   46613 retry.go:31] will retry after 2.411227582s: waiting for machine to come up
	I0130 20:39:18.810874   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:18.811404   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:18.811439   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:18.811358   46613 retry.go:31] will retry after 2.231016506s: waiting for machine to come up
	I0130 20:39:19.296383   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:21.790307   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:20.058718   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:20.058800   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:20.074443   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:20.558683   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:20.558756   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:20.574765   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.058367   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:21.058456   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:21.074652   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.558528   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:21.558648   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:21.573650   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:22.058161   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:22.058280   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:22.070780   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:22.558448   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:22.558525   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:22.572220   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:23.057797   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:23.057884   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:23.071260   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:23.558193   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:23.558278   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:23.571801   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:24.058483   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:24.058603   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:24.070898   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:24.558465   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:24.558546   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:24.573966   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.008056   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:23.484244   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:23.987592   45441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.987615   45441 pod_ready.go:81] duration metric: took 7.510477497s waiting for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.987624   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.993335   45441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.993358   45441 pod_ready.go:81] duration metric: took 5.726687ms waiting for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.993373   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s9ssn" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.998021   45441 pod_ready.go:92] pod "kube-proxy-s9ssn" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.998045   45441 pod_ready.go:81] duration metric: took 4.664039ms waiting for pod "kube-proxy-s9ssn" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.998057   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:21.044853   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:21.045392   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:21.045423   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:21.045336   46613 retry.go:31] will retry after 3.525646558s: waiting for machine to come up
	I0130 20:39:24.573139   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:24.573573   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:24.573596   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:24.573532   46613 retry.go:31] will retry after 4.365207536s: waiting for machine to come up
	I0130 20:39:23.790893   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:25.791630   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:28.291352   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:25.058653   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:25.058753   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:25.072061   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:25.072091   45819 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:39:25.072115   45819 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:39:25.072127   45819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:39:25.072183   45819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:25.121788   45819 cri.go:89] found id: ""
	I0130 20:39:25.121863   45819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:39:25.137294   45819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:39:25.146157   45819 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:39:25.146213   45819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:25.155323   45819 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:25.155346   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:25.279765   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.617419   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.337617183s)
	I0130 20:39:26.617457   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.825384   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.916818   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:27.026546   45819 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:39:27.026647   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:27.527637   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.026724   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.527352   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.578771   45819 api_server.go:72] duration metric: took 1.552227614s to wait for apiserver process to appear ...
	I0130 20:39:28.578793   45819 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:39:28.578814   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:28.579348   45819 api_server.go:269] stopped: https://192.168.39.16:8443/healthz: Get "https://192.168.39.16:8443/healthz": dial tcp 192.168.39.16:8443: connect: connection refused
	I0130 20:39:29.078918   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:26.006018   45441 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:27.506562   45441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:27.506596   45441 pod_ready.go:81] duration metric: took 3.50852897s waiting for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:27.506609   45441 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:29.514067   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:28.941922   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.942489   44923 main.go:141] libmachine: (no-preload-473743) Found IP for machine: 192.168.50.220
	I0130 20:39:28.942511   44923 main.go:141] libmachine: (no-preload-473743) Reserving static IP address...
	I0130 20:39:28.942528   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has current primary IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.943003   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "no-preload-473743", mac: "52:54:00:c5:07:4a", ip: "192.168.50.220"} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:28.943046   44923 main.go:141] libmachine: (no-preload-473743) DBG | skip adding static IP to network mk-no-preload-473743 - found existing host DHCP lease matching {name: "no-preload-473743", mac: "52:54:00:c5:07:4a", ip: "192.168.50.220"}
	I0130 20:39:28.943063   44923 main.go:141] libmachine: (no-preload-473743) Reserved static IP address: 192.168.50.220
	I0130 20:39:28.943081   44923 main.go:141] libmachine: (no-preload-473743) DBG | Getting to WaitForSSH function...
	I0130 20:39:28.943092   44923 main.go:141] libmachine: (no-preload-473743) Waiting for SSH to be available...
	I0130 20:39:28.945617   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.945983   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:28.946016   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.946192   44923 main.go:141] libmachine: (no-preload-473743) DBG | Using SSH client type: external
	I0130 20:39:28.946224   44923 main.go:141] libmachine: (no-preload-473743) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa (-rw-------)
	I0130 20:39:28.946257   44923 main.go:141] libmachine: (no-preload-473743) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:39:28.946268   44923 main.go:141] libmachine: (no-preload-473743) DBG | About to run SSH command:
	I0130 20:39:28.946279   44923 main.go:141] libmachine: (no-preload-473743) DBG | exit 0
	I0130 20:39:29.047127   44923 main.go:141] libmachine: (no-preload-473743) DBG | SSH cmd err, output: <nil>: 
	I0130 20:39:29.047505   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetConfigRaw
	I0130 20:39:29.048239   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:29.051059   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.051539   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.051572   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.051875   44923 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/config.json ...
	I0130 20:39:29.052098   44923 machine.go:88] provisioning docker machine ...
	I0130 20:39:29.052122   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:29.052328   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.052480   44923 buildroot.go:166] provisioning hostname "no-preload-473743"
	I0130 20:39:29.052503   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.052693   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.055532   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.055937   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.055968   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.056075   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.056267   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.056428   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.056644   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.056802   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.057242   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.057266   44923 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-473743 && echo "no-preload-473743" | sudo tee /etc/hostname
	I0130 20:39:29.199944   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-473743
	
	I0130 20:39:29.199987   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.202960   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.203402   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.203428   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.203648   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.203840   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.203974   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.204101   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.204253   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.204787   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.204815   44923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-473743' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-473743/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-473743' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:39:29.343058   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:39:29.343090   44923 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:39:29.343118   44923 buildroot.go:174] setting up certificates
	I0130 20:39:29.343131   44923 provision.go:83] configureAuth start
	I0130 20:39:29.343154   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.343457   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:29.346265   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.346671   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.346714   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.346889   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.349402   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.349799   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.349830   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.350015   44923 provision.go:138] copyHostCerts
	I0130 20:39:29.350079   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:39:29.350092   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:39:29.350146   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:39:29.350244   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:39:29.350253   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:39:29.350277   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:39:29.350343   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:39:29.350354   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:39:29.350371   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:39:29.350428   44923 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.no-preload-473743 san=[192.168.50.220 192.168.50.220 localhost 127.0.0.1 minikube no-preload-473743]
	I0130 20:39:29.671070   44923 provision.go:172] copyRemoteCerts
	I0130 20:39:29.671125   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:39:29.671150   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.673890   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.674199   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.674234   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.674386   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.674604   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.674744   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.674901   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:29.769184   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:39:29.797035   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 20:39:29.822932   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:39:29.849781   44923 provision.go:86] duration metric: configureAuth took 506.627652ms
	I0130 20:39:29.849818   44923 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:39:29.850040   44923 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 20:39:29.850134   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.852709   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.853108   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.853137   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.853331   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.853574   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.853757   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.853924   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.854108   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.854635   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.854660   44923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:39:30.232249   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:39:30.232288   44923 machine.go:91] provisioned docker machine in 1.180174143s
	I0130 20:39:30.232302   44923 start.go:300] post-start starting for "no-preload-473743" (driver="kvm2")
	I0130 20:39:30.232321   44923 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:39:30.232348   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.232668   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:39:30.232705   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.235383   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.235716   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.235745   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.235860   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.236049   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.236203   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.236346   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.332330   44923 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:39:30.337659   44923 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:39:30.337684   44923 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:39:30.337756   44923 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:39:30.337847   44923 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:39:30.337960   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:39:30.349830   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:30.374759   44923 start.go:303] post-start completed in 142.443985ms
	I0130 20:39:30.374780   44923 fix.go:56] fixHost completed within 23.926338591s
	I0130 20:39:30.374800   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.377807   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.378189   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.378244   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.378414   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.378605   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.378803   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.378954   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.379112   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:30.379649   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:30.379677   44923 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:39:30.512888   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647170.453705676
	
	I0130 20:39:30.512916   44923 fix.go:206] guest clock: 1706647170.453705676
	I0130 20:39:30.512925   44923 fix.go:219] Guest: 2024-01-30 20:39:30.453705676 +0000 UTC Remote: 2024-01-30 20:39:30.374783103 +0000 UTC m=+364.620017880 (delta=78.922573ms)
	I0130 20:39:30.512966   44923 fix.go:190] guest clock delta is within tolerance: 78.922573ms
	I0130 20:39:30.512976   44923 start.go:83] releasing machines lock for "no-preload-473743", held for 24.064563389s
	I0130 20:39:30.513083   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.513387   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:30.516359   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.516699   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.516728   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.516908   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517590   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517747   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517817   44923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:39:30.517864   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.517954   44923 ssh_runner.go:195] Run: cat /version.json
	I0130 20:39:30.517972   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.520814   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521070   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521202   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.521228   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521456   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.521480   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521480   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.521682   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.521722   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.521844   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.521845   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.521997   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.522149   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.522424   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.632970   44923 ssh_runner.go:195] Run: systemctl --version
	I0130 20:39:30.638936   44923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:39:30.784288   44923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:39:30.792079   44923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:39:30.792150   44923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:39:30.809394   44923 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:39:30.809421   44923 start.go:475] detecting cgroup driver to use...
	I0130 20:39:30.809496   44923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:39:30.824383   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:39:30.838710   44923 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:39:30.838765   44923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:39:30.852928   44923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:39:30.867162   44923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:39:30.995737   44923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:39:31.113661   44923 docker.go:233] disabling docker service ...
	I0130 20:39:31.113726   44923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:39:31.127737   44923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:39:31.139320   44923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:39:31.240000   44923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:39:31.340063   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:39:31.353303   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:39:31.371834   44923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:39:31.371889   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.382579   44923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:39:31.382639   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.392544   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.403023   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.413288   44923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:39:31.423806   44923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:39:31.433817   44923 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:39:31.433884   44923 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:39:31.447456   44923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:39:31.457035   44923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:39:31.562847   44923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:39:31.752772   44923 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:39:31.752844   44923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:39:31.757880   44923 start.go:543] Will wait 60s for crictl version
	I0130 20:39:31.757943   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:31.761967   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:39:31.800658   44923 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:39:31.800725   44923 ssh_runner.go:195] Run: crio --version
	I0130 20:39:31.852386   44923 ssh_runner.go:195] Run: crio --version
	I0130 20:39:31.910758   44923 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0130 20:39:30.791795   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:33.292307   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:34.079616   45819 api_server.go:269] stopped: https://192.168.39.16:8443/healthz: Get "https://192.168.39.16:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 20:39:34.079674   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:31.516571   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:33.517547   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:31.912241   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:31.915377   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:31.915705   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:31.915735   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:31.915985   44923 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0130 20:39:31.920870   44923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:31.934252   44923 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 20:39:31.934304   44923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:31.975687   44923 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0130 20:39:31.975714   44923 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 20:39:31.975762   44923 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:31.975874   44923 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:31.975900   44923 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:31.975936   44923 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0130 20:39:31.975959   44923 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:31.975903   44923 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:31.976051   44923 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:31.976063   44923 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:31.977466   44923 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:31.977485   44923 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:31.977525   44923 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0130 20:39:31.977531   44923 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:31.977569   44923 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:31.977559   44923 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:31.977663   44923 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:31.977812   44923 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:32.130396   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0130 20:39:32.132105   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.135297   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.135817   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.136698   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.154928   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.172264   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.355420   44923 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0130 20:39:32.355504   44923 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.355537   44923 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0130 20:39:32.355580   44923 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.355454   44923 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0130 20:39:32.355636   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355675   44923 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.355606   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355724   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355763   44923 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0130 20:39:32.355803   44923 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.355844   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355855   44923 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0130 20:39:32.355884   44923 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.355805   44923 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0130 20:39:32.355928   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355929   44923 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.355974   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.360081   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.370164   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.370202   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.370243   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.370259   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.370374   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.466609   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.466714   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.503174   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:32.503293   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:32.507888   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0130 20:39:32.507963   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0130 20:39:32.508061   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:32.508061   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:32.518772   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:32.518883   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0130 20:39:32.518906   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.518932   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:32.518951   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.518824   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0130 20:39:32.518996   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0130 20:39:32.519041   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:32.521450   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0130 20:39:32.521493   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0130 20:39:32.848844   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.579929   44923 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.060972543s)
	I0130 20:39:34.579971   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0130 20:39:34.580001   44923 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.060936502s)
	I0130 20:39:34.580034   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0130 20:39:34.580045   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.061073363s)
	I0130 20:39:34.580059   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0130 20:39:34.580082   44923 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.731208309s)
	I0130 20:39:34.580132   44923 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0130 20:39:34.580088   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:34.580225   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:34.580173   44923 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.580343   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:34.585311   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.796586   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:34.796615   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:34.796633   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:34.846035   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:34.846071   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:35.079544   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:35.091673   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 20:39:35.091710   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 20:39:35.579233   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:35.587045   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 20:39:35.587071   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 20:39:36.079775   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:36.086927   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0130 20:39:36.095953   45819 api_server.go:141] control plane version: v1.16.0
	I0130 20:39:36.095976   45819 api_server.go:131] duration metric: took 7.517178171s to wait for apiserver health ...
	I0130 20:39:36.095985   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:39:36.095992   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:36.097742   45819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:39:35.792385   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:37.792648   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:36.099012   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:39:36.108427   45819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:39:36.126083   45819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:39:36.138855   45819 system_pods.go:59] 8 kube-system pods found
	I0130 20:39:36.138882   45819 system_pods.go:61] "coredns-5644d7b6d9-547k4" [6b1119fe-9c8a-44fb-b813-58271228b290] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:39:36.138888   45819 system_pods.go:61] "coredns-5644d7b6d9-dtfzh" [4cbd4f36-bc01-4f55-ba50-b7dcdcb35b9b] Running
	I0130 20:39:36.138894   45819 system_pods.go:61] "etcd-old-k8s-version-150971" [22eeed2c-7454-4b9d-8b4d-1c9a2e5feaf7] Running
	I0130 20:39:36.138899   45819 system_pods.go:61] "kube-apiserver-old-k8s-version-150971" [5ef062e6-2f78-485f-9420-e8714128e39f] Running
	I0130 20:39:36.138903   45819 system_pods.go:61] "kube-controller-manager-old-k8s-version-150971" [4e5df6df-486e-47a8-89b8-8d962212ec3e] Running
	I0130 20:39:36.138907   45819 system_pods.go:61] "kube-proxy-ncl7z" [51b28456-0070-46fc-b647-e28d6bdcfde2] Running
	I0130 20:39:36.138914   45819 system_pods.go:61] "kube-scheduler-old-k8s-version-150971" [384c4dfa-180b-4ec3-9e12-3c6d9e581c0e] Running
	I0130 20:39:36.138918   45819 system_pods.go:61] "storage-provisioner" [8a75a04c-1b80-41f6-9dfd-a7ee6f908b9d] Running
	I0130 20:39:36.138928   45819 system_pods.go:74] duration metric: took 12.820934ms to wait for pod list to return data ...
	I0130 20:39:36.138936   45819 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:39:36.142193   45819 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:39:36.142224   45819 node_conditions.go:123] node cpu capacity is 2
	I0130 20:39:36.142236   45819 node_conditions.go:105] duration metric: took 3.295582ms to run NodePressure ...
	I0130 20:39:36.142256   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:36.480656   45819 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:39:36.486153   45819 retry.go:31] will retry after 323.854639ms: kubelet not initialised
	I0130 20:39:36.816707   45819 retry.go:31] will retry after 303.422684ms: kubelet not initialised
	I0130 20:39:37.125369   45819 retry.go:31] will retry after 697.529029ms: kubelet not initialised
	I0130 20:39:37.829322   45819 retry.go:31] will retry after 626.989047ms: kubelet not initialised
	I0130 20:39:38.463635   45819 retry.go:31] will retry after 1.390069174s: kubelet not initialised
	I0130 20:39:35.519218   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:38.013652   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:40.014621   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:37.168054   44923 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.582708254s)
	I0130 20:39:37.168111   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0130 20:39:37.168188   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.587929389s)
	I0130 20:39:37.168204   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:37.168226   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0130 20:39:37.168257   44923 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:37.168330   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:37.173865   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0130 20:39:39.259662   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.091304493s)
	I0130 20:39:39.259692   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0130 20:39:39.259719   44923 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:39.259777   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:40.291441   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:42.292550   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:39.861179   45819 retry.go:31] will retry after 1.194254513s: kubelet not initialised
	I0130 20:39:41.067315   45819 retry.go:31] will retry after 3.766341089s: kubelet not initialised
	I0130 20:39:42.016919   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:44.514681   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:43.804203   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.54440472s)
	I0130 20:39:43.804228   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0130 20:39:43.804262   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:43.804360   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:44.790577   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:46.791751   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:44.839501   45819 retry.go:31] will retry after 2.957753887s: kubelet not initialised
	I0130 20:39:47.802749   45819 retry.go:31] will retry after 4.750837771s: kubelet not initialised
	I0130 20:39:47.016112   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:49.517716   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:46.385349   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.580960989s)
	I0130 20:39:46.385378   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0130 20:39:46.385403   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:46.385446   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:48.570468   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.184994355s)
	I0130 20:39:48.570504   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0130 20:39:48.570529   44923 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:48.570575   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:49.318398   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0130 20:39:49.318449   44923 cache_images.go:123] Successfully loaded all cached images
	I0130 20:39:49.318457   44923 cache_images.go:92] LoadImages completed in 17.342728639s
	I0130 20:39:49.318542   44923 ssh_runner.go:195] Run: crio config
	I0130 20:39:49.393074   44923 cni.go:84] Creating CNI manager for ""
	I0130 20:39:49.393094   44923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:49.393116   44923 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:39:49.393143   44923 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.220 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-473743 NodeName:no-preload-473743 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:39:49.393301   44923 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-473743"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:39:49.393384   44923 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-473743 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-473743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:39:49.393445   44923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0130 20:39:49.403506   44923 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:39:49.403582   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:39:49.412473   44923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0130 20:39:49.429600   44923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0130 20:39:49.445613   44923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0130 20:39:49.462906   44923 ssh_runner.go:195] Run: grep 192.168.50.220	control-plane.minikube.internal$ /etc/hosts
	I0130 20:39:49.466844   44923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:49.479363   44923 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743 for IP: 192.168.50.220
	I0130 20:39:49.479388   44923 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:39:49.479540   44923 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:39:49.479599   44923 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:39:49.479682   44923 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.key
	I0130 20:39:49.479766   44923 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.key.ef9da43a
	I0130 20:39:49.479832   44923 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.key
	I0130 20:39:49.479984   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:39:49.480020   44923 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:39:49.480031   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:39:49.480052   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:39:49.480082   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:39:49.480104   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:39:49.480148   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:49.480782   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:39:49.504588   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 20:39:49.530340   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:39:49.552867   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:39:49.575974   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:39:49.598538   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:39:49.623489   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:39:49.646965   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:39:49.671998   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:39:49.695493   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:39:49.718975   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:39:49.741793   44923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:39:49.758291   44923 ssh_runner.go:195] Run: openssl version
	I0130 20:39:49.765053   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:39:49.775428   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.780081   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.780130   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.785510   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:39:49.797983   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:39:49.807934   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.812367   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.812416   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.818021   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:39:49.827603   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:39:49.837248   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.841789   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.841838   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.847684   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:39:49.857387   44923 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:39:49.862411   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:39:49.871862   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:39:49.877904   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:39:49.883820   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:39:49.890534   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:39:49.898143   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:39:49.905607   44923 kubeadm.go:404] StartCluster: {Name:no-preload-473743 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-473743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.220 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:39:49.905713   44923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:39:49.905768   44923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:49.956631   44923 cri.go:89] found id: ""
	I0130 20:39:49.956705   44923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:39:49.967500   44923 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:39:49.967516   44923 kubeadm.go:636] restartCluster start
	I0130 20:39:49.967572   44923 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:39:49.977077   44923 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:49.978191   44923 kubeconfig.go:92] found "no-preload-473743" server: "https://192.168.50.220:8443"
	I0130 20:39:49.980732   44923 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:39:49.990334   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:49.990377   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:50.001427   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:50.491017   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:50.491080   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:50.503162   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:48.792438   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:51.290002   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:53.291511   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:52.558586   45819 retry.go:31] will retry after 13.209460747s: kubelet not initialised
	I0130 20:39:52.013659   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:54.013756   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:50.991212   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:50.991312   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:51.004155   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:51.491296   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:51.491369   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:51.502771   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:51.991398   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:51.991529   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:52.004164   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:52.490700   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:52.490817   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:52.504616   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:52.991009   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:52.991101   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:53.004178   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:53.490804   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:53.490897   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:53.502856   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:53.990345   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:53.990451   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:54.003812   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:54.491414   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:54.491522   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:54.502969   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:54.991126   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:54.991212   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:55.003001   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:55.490521   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:55.490609   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:55.501901   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:55.791198   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:58.289750   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:56.513098   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:58.514459   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:55.990820   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:55.990893   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:56.002224   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:56.490338   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:56.490432   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:56.502497   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:56.991097   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:56.991189   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:57.002115   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:57.490604   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:57.490681   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:57.501777   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:57.991320   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:57.991419   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:58.002136   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:58.490641   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:58.490724   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:58.502247   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:58.990830   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:58.990951   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:59.001469   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:59.491109   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:59.491223   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:59.502348   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:59.991097   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:59.991182   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:40:00.002945   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:40:00.002978   44923 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:40:00.002986   44923 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:40:00.002996   44923 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:40:00.003068   44923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:40:00.045168   44923 cri.go:89] found id: ""
	I0130 20:40:00.045245   44923 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:40:00.061704   44923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:40:00.074448   44923 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:40:00.074505   44923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:40:00.083478   44923 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:40:00.083502   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:00.200934   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:00.791680   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:02.791880   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:00.515342   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:02.515914   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:05.014585   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:00.824616   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.029317   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.146596   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.232362   44923 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:40:01.232439   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:01.733118   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:02.232964   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:02.732910   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.232934   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.732852   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.758730   44923 api_server.go:72] duration metric: took 2.526367424s to wait for apiserver process to appear ...
	I0130 20:40:03.758768   44923 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:40:03.758786   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:05.290228   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:07.290842   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:07.869847   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:40:07.869897   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:40:07.869919   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:07.986795   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:07.986841   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:08.259140   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:08.265487   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:08.265523   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:08.759024   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:08.764138   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:08.764163   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:09.259821   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:09.265120   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0130 20:40:09.275933   44923 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 20:40:09.275956   44923 api_server.go:131] duration metric: took 5.517181599s to wait for apiserver health ...
	I0130 20:40:09.275965   44923 cni.go:84] Creating CNI manager for ""
	I0130 20:40:09.275971   44923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:40:09.277688   44923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:40:05.773670   45819 retry.go:31] will retry after 17.341210204s: kubelet not initialised
	I0130 20:40:07.014677   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:09.516836   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:09.279058   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:40:09.307862   44923 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:40:09.339259   44923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:40:09.355136   44923 system_pods.go:59] 8 kube-system pods found
	I0130 20:40:09.355177   44923 system_pods.go:61] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:40:09.355185   44923 system_pods.go:61] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:40:09.355194   44923 system_pods.go:61] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:40:09.355201   44923 system_pods.go:61] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:40:09.355210   44923 system_pods.go:61] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:40:09.355219   44923 system_pods.go:61] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:40:09.355238   44923 system_pods.go:61] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:40:09.355249   44923 system_pods.go:61] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:40:09.355256   44923 system_pods.go:74] duration metric: took 15.951624ms to wait for pod list to return data ...
	I0130 20:40:09.355277   44923 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:40:09.361985   44923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:40:09.362014   44923 node_conditions.go:123] node cpu capacity is 2
	I0130 20:40:09.362025   44923 node_conditions.go:105] duration metric: took 6.74245ms to run NodePressure ...
	I0130 20:40:09.362045   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:09.678111   44923 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:40:09.687808   44923 kubeadm.go:787] kubelet initialised
	I0130 20:40:09.687828   44923 kubeadm.go:788] duration metric: took 9.689086ms waiting for restarted kubelet to initialise ...
	I0130 20:40:09.687835   44923 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:09.694574   44923 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.700190   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "coredns-76f75df574-d4c7t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.700214   44923 pod_ready.go:81] duration metric: took 5.613522ms waiting for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.700230   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "coredns-76f75df574-d4c7t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.700237   44923 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.705513   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "etcd-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.705534   44923 pod_ready.go:81] duration metric: took 5.286859ms waiting for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.705545   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "etcd-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.705553   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.710360   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-apiserver-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.710378   44923 pod_ready.go:81] duration metric: took 4.814631ms waiting for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.710388   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-apiserver-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.710396   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.746412   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.746447   44923 pod_ready.go:81] duration metric: took 36.037006ms waiting for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.746460   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.746469   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.143330   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-proxy-zklzt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.143364   44923 pod_ready.go:81] duration metric: took 396.879081ms waiting for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.143377   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-proxy-zklzt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.143385   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.549132   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-scheduler-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.549171   44923 pod_ready.go:81] duration metric: took 405.77755ms waiting for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.549192   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-scheduler-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.549201   44923 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.942777   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.942802   44923 pod_ready.go:81] duration metric: took 393.589996ms waiting for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.942811   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.942817   44923 pod_ready.go:38] duration metric: took 1.254975084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:10.942834   44923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:40:10.954894   44923 ops.go:34] apiserver oom_adj: -16
	I0130 20:40:10.954916   44923 kubeadm.go:640] restartCluster took 20.987393757s
	I0130 20:40:10.954926   44923 kubeadm.go:406] StartCluster complete in 21.049328159s
	I0130 20:40:10.954944   44923 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:40:10.955025   44923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:40:10.956906   44923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:40:10.957249   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:40:10.957343   44923 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:40:10.957411   44923 addons.go:69] Setting storage-provisioner=true in profile "no-preload-473743"
	I0130 20:40:10.957434   44923 addons.go:234] Setting addon storage-provisioner=true in "no-preload-473743"
	I0130 20:40:10.957440   44923 addons.go:69] Setting metrics-server=true in profile "no-preload-473743"
	I0130 20:40:10.957447   44923 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0130 20:40:10.957451   44923 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:40:10.957471   44923 addons.go:234] Setting addon metrics-server=true in "no-preload-473743"
	W0130 20:40:10.957481   44923 addons.go:243] addon metrics-server should already be in state true
	I0130 20:40:10.957512   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.957522   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.957946   44923 addons.go:69] Setting default-storageclass=true in profile "no-preload-473743"
	I0130 20:40:10.957911   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958230   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.958246   44923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-473743"
	I0130 20:40:10.958477   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958517   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.958600   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958621   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.962458   44923 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-473743" context rescaled to 1 replicas
	I0130 20:40:10.962497   44923 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.220 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:40:10.964710   44923 out.go:177] * Verifying Kubernetes components...
	I0130 20:40:10.966259   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:40:10.975195   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45125
	I0130 20:40:10.975661   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.976231   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.976262   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.976885   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.977509   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.977542   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.978199   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0130 20:40:10.978220   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I0130 20:40:10.979039   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.979106   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.979581   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.979600   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.979584   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.979663   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.979964   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.980074   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.980160   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:10.980655   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.980690   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.984068   44923 addons.go:234] Setting addon default-storageclass=true in "no-preload-473743"
	W0130 20:40:10.984119   44923 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:40:10.984155   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.984564   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.984615   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.997126   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
	I0130 20:40:10.997598   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.997990   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.998006   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.998355   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.998520   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:10.998838   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0130 20:40:10.999186   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.999589   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.999604   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.000003   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.000289   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:11.000433   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.002723   44923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:40:11.001789   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.004317   44923 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:40:11.004329   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:40:11.004345   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.005791   44923 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:40:11.007234   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:40:11.007246   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:40:11.007259   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.006415   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0130 20:40:11.007375   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.007826   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:11.008219   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.008258   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.008405   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.008550   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:11.008566   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.008624   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.008780   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.008900   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.008904   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.009548   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:11.009578   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:11.010414   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.010713   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.010733   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.010938   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.011137   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.011308   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.011424   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.047889   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44097
	I0130 20:40:11.048317   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:11.048800   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:11.048820   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.049210   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.049451   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:11.051439   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.052012   44923 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:40:11.052030   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:40:11.052049   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.055336   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.055865   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.055888   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.055976   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.056175   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.056344   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.056461   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.176670   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:40:11.176694   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:40:11.182136   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:40:11.194238   44923 node_ready.go:35] waiting up to 6m0s for node "no-preload-473743" to be "Ready" ...
	I0130 20:40:11.194301   44923 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 20:40:11.213877   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:40:11.222566   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:40:11.222591   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:40:11.264089   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:40:11.264119   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:40:11.337758   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:40:12.237415   44923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.055244284s)
	I0130 20:40:12.237483   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237482   44923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023570997s)
	I0130 20:40:12.237504   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237521   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237538   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237867   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.237927   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.237949   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237964   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237973   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.237986   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.238018   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.238030   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.238303   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.238319   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.238415   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.238473   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.238485   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.245407   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.245432   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.245653   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.245670   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.287632   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.287660   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.287973   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.287998   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.288000   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.288014   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.288024   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.288266   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.288286   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.288297   44923 addons.go:470] Verifying addon metrics-server=true in "no-preload-473743"
	I0130 20:40:12.288352   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.290298   44923 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:40:09.291762   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:11.791994   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:12.016265   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:14.515097   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:12.291867   44923 addons.go:505] enable addons completed in 1.334521495s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:40:13.200767   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:15.699345   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:14.291583   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:16.292248   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:17.014332   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:19.014556   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:18.198624   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:18.699015   44923 node_ready.go:49] node "no-preload-473743" has status "Ready":"True"
	I0130 20:40:18.699041   44923 node_ready.go:38] duration metric: took 7.504770144s waiting for node "no-preload-473743" to be "Ready" ...
	I0130 20:40:18.699050   44923 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:18.709647   44923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.718022   44923 pod_ready.go:92] pod "coredns-76f75df574-d4c7t" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:18.718046   44923 pod_ready.go:81] duration metric: took 8.370541ms waiting for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.718054   44923 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.722992   44923 pod_ready.go:92] pod "etcd-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:18.723012   44923 pod_ready.go:81] duration metric: took 4.951762ms waiting for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.723020   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:20.732288   44923 pod_ready.go:102] pod "kube-apiserver-no-preload-473743" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:18.791445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:21.290205   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.123817   45819 kubeadm.go:787] kubelet initialised
	I0130 20:40:23.123842   45819 kubeadm.go:788] duration metric: took 46.643164333s waiting for restarted kubelet to initialise ...
	I0130 20:40:23.123849   45819 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:23.128282   45819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.132665   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.132688   45819 pod_ready.go:81] duration metric: took 4.375362ms waiting for pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.132701   45819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.137072   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.137092   45819 pod_ready.go:81] duration metric: took 4.379467ms waiting for pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.137102   45819 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.142038   45819 pod_ready.go:92] pod "etcd-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.142058   45819 pod_ready.go:81] duration metric: took 4.949104ms waiting for pod "etcd-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.142070   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.146657   45819 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.146676   45819 pod_ready.go:81] duration metric: took 4.598238ms waiting for pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.146686   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.518159   45819 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.518189   45819 pod_ready.go:81] duration metric: took 371.488133ms waiting for pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.518203   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ncl7z" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.919594   45819 pod_ready.go:92] pod "kube-proxy-ncl7z" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.919628   45819 pod_ready.go:81] duration metric: took 401.417322ms waiting for pod "kube-proxy-ncl7z" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.919644   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:24.318125   45819 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:24.318152   45819 pod_ready.go:81] duration metric: took 398.499457ms waiting for pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:24.318166   45819 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.513600   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.514060   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:21.233466   44923 pod_ready.go:92] pod "kube-apiserver-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.233494   44923 pod_ready.go:81] duration metric: took 2.510466903s waiting for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.233507   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.240688   44923 pod_ready.go:92] pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.240709   44923 pod_ready.go:81] duration metric: took 7.194165ms waiting for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.240721   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.248250   44923 pod_ready.go:92] pod "kube-proxy-zklzt" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.248271   44923 pod_ready.go:81] duration metric: took 7.542304ms waiting for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.248278   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.256673   44923 pod_ready.go:92] pod "kube-scheduler-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.256700   44923 pod_ready.go:81] duration metric: took 2.008414366s waiting for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.256712   44923 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:25.263480   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.790334   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.290232   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.292270   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.324649   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.825120   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.016305   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.513650   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:27.264434   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:29.764240   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:30.793210   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:33.292255   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:31.326850   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:33.824698   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:30.514448   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:32.518435   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.013676   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:32.264144   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:34.763689   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.789964   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.791095   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.825018   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:38.326094   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.014222   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:39.517868   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.265137   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:39.764115   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:40.290332   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.290850   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:40.327135   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.824370   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.014917   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.516872   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.264387   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.265504   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.291131   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.790512   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.827108   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:47.327816   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.518922   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.014136   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.765151   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.265178   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:48.790952   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.291730   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.824442   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:52.325401   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.014513   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.518388   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.266567   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.764501   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.789915   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:55.790332   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:57.791445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:54.825612   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:57.324364   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:59.327308   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:56.020804   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:58.515544   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:56.263707   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:58.264200   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:00.264261   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:59.792066   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:02.289879   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:01.824631   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:03.824749   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:01.014649   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:03.014805   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:05.017318   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:02.763825   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:04.764040   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:04.290927   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.791853   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.326570   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:08.824889   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:07.516190   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:10.018532   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.765257   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:09.263466   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:09.290744   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:11.791416   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:10.825025   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:13.324947   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:12.514850   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:14.522700   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:11.263911   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:13.763429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:15.766371   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:14.289786   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:16.291753   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:15.325297   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:17.824762   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:17.014087   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:19.518139   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:18.263727   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:20.263854   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:18.791517   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:21.292155   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:19.825751   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:22.324733   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:21.518205   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:24.015562   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:22.767815   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:25.263283   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:23.790847   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.290464   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:24.824063   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.825938   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:29.325683   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.016724   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:28.514670   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:27.264429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:29.264577   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:28.791861   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.291558   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.824367   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.824771   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:30.515432   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.014091   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.265902   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.764211   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.764788   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.791968   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:36.290991   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:38.291383   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.824891   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.825500   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.514120   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.514579   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:39.516165   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.765006   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.263816   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.791224   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.792487   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.326148   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.825282   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.014531   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:44.514337   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.264845   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:44.764275   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:45.290370   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.790557   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:45.325184   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.825091   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:46.515035   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.013829   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.263752   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.263882   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.790715   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:52.291348   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:50.326963   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:52.825278   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:51.014381   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:53.016755   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:51.264167   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:53.264888   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.265000   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:54.291846   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:56.790351   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.325156   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:57.325446   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:59.326114   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.515866   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:58.013768   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:00.014052   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:57.763548   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:59.764374   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:58.790584   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:01.294420   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:01.827046   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.325425   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:02.514100   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.516981   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:02.264420   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.264851   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:03.790918   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.290560   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:08.291334   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.824232   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:08.824527   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:07.014375   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:09.513980   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.764222   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:09.264299   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:10.292477   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:12.795626   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:10.825706   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:13.325572   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:11.514369   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:14.016090   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:11.264881   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:13.763625   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.764616   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.290292   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:17.790263   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.326185   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:17.826504   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:16.518263   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:19.014219   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:18.265723   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:20.764663   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:19.792068   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:22.292221   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:20.325069   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:22.326307   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:21.014811   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:23.014876   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:25.017016   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:23.264098   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:25.267065   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:24.791616   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.291739   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:24.825416   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:26.826380   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.325717   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.513692   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:30.015246   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.763938   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.764135   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.789997   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.790272   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.825466   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:33.826959   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:32.513718   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:35.014948   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.780185   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:34.265062   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:33.790477   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.290139   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.291801   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.325475   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.825210   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:37.513778   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:39.518155   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.764137   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.765005   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:40.790050   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:42.791739   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:41.325239   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:43.826300   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:42.013844   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:44.014396   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:41.268687   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:43.765101   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:45.290120   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:47.291365   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.325321   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.824944   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.015721   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.514689   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.269498   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.763780   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:50.765289   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:49.790212   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:52.291090   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:51.324622   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:53.324873   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:51.015934   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:53.016171   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:52.765777   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.264419   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:54.292666   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:56.790098   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.825230   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.324546   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.514240   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.014796   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:57.764094   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:59.764594   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.790445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.790844   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:03.290632   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.325916   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:02.824174   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.514203   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:02.515317   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:05.018840   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:01.767672   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:04.263736   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:04.290221   45037 pod_ready.go:81] duration metric: took 4m0.006974938s waiting for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	E0130 20:43:04.290244   45037 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:43:04.290252   45037 pod_ready.go:38] duration metric: took 4m4.550384705s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:43:04.290265   45037 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:43:04.290289   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:04.290330   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:04.354567   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:04.354594   45037 cri.go:89] found id: ""
	I0130 20:43:04.354603   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:04.354664   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.359890   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:04.359961   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:04.399415   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:04.399437   45037 cri.go:89] found id: ""
	I0130 20:43:04.399444   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:04.399484   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.404186   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:04.404241   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:04.445968   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:04.445994   45037 cri.go:89] found id: ""
	I0130 20:43:04.446003   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:04.446060   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.450215   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:04.450285   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:04.492438   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:04.492462   45037 cri.go:89] found id: ""
	I0130 20:43:04.492476   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:04.492537   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.497227   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:04.497301   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:04.535936   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:04.535960   45037 cri.go:89] found id: ""
	I0130 20:43:04.535970   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:04.536026   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.540968   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:04.541046   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:04.584192   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:04.584214   45037 cri.go:89] found id: ""
	I0130 20:43:04.584222   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:04.584280   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.588842   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:04.588914   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:04.630957   45037 cri.go:89] found id: ""
	I0130 20:43:04.630984   45037 logs.go:276] 0 containers: []
	W0130 20:43:04.630994   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:04.631000   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:04.631057   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:04.672712   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:04.672741   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:04.672747   45037 cri.go:89] found id: ""
	I0130 20:43:04.672757   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:04.672830   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.677537   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.681806   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:04.681833   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:04.743389   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:04.743420   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:04.783857   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:04.783891   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:04.838800   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:04.838827   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:04.897337   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:04.897361   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:04.954337   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:04.954367   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:05.110447   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:05.110476   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:05.169238   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:05.169275   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:05.209860   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:05.209890   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:05.224272   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:05.224296   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:05.264818   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:05.264857   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:05.304626   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:05.304657   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:05.748336   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:05.748377   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.306639   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:43:08.324001   45037 api_server.go:72] duration metric: took 4m16.400279002s to wait for apiserver process to appear ...
	I0130 20:43:08.324028   45037 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:43:08.324061   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:08.324111   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:08.364000   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.364026   45037 cri.go:89] found id: ""
	I0130 20:43:08.364036   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:08.364093   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.368770   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:08.368843   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:08.411371   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:08.411394   45037 cri.go:89] found id: ""
	I0130 20:43:08.411404   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:08.411462   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.415582   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:08.415648   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:08.455571   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:08.455601   45037 cri.go:89] found id: ""
	I0130 20:43:08.455612   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:08.455662   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.459908   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:08.459972   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:08.497350   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:08.497374   45037 cri.go:89] found id: ""
	I0130 20:43:08.497383   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:08.497441   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.501504   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:08.501552   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:08.550031   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:08.550057   45037 cri.go:89] found id: ""
	I0130 20:43:08.550066   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:08.550181   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.555166   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:08.555215   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:08.590903   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:08.590929   45037 cri.go:89] found id: ""
	I0130 20:43:08.590939   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:08.590997   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.594837   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:08.594888   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:08.630989   45037 cri.go:89] found id: ""
	I0130 20:43:08.631015   45037 logs.go:276] 0 containers: []
	W0130 20:43:08.631024   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:08.631029   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:08.631072   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:08.669579   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:08.669603   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:08.669609   45037 cri.go:89] found id: ""
	I0130 20:43:08.669617   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:08.669666   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.673938   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.677733   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:08.677757   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:08.726492   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:08.726519   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:04.825623   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:07.331997   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:07.514074   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:09.514484   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:06.264040   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:08.264505   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:10.764072   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:08.740624   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:08.740645   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:08.792517   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:08.792547   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:08.829131   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:08.829166   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:08.870777   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:08.870802   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:08.909648   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:08.909678   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.953671   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:08.953701   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:08.989624   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:08.989648   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:09.383141   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:09.383174   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:09.442685   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:09.442719   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:09.563370   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:09.563398   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:09.614390   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:09.614422   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:12.156906   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:43:12.161980   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 200:
	ok
	I0130 20:43:12.163284   45037 api_server.go:141] control plane version: v1.28.4
	I0130 20:43:12.163308   45037 api_server.go:131] duration metric: took 3.839271753s to wait for apiserver health ...
	I0130 20:43:12.163318   45037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:43:12.163343   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:12.163389   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:12.207351   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:12.207372   45037 cri.go:89] found id: ""
	I0130 20:43:12.207381   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:12.207436   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.213923   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:12.213987   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:12.263647   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:12.263680   45037 cri.go:89] found id: ""
	I0130 20:43:12.263690   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:12.263743   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.268327   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:12.268381   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:12.310594   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:12.310614   45037 cri.go:89] found id: ""
	I0130 20:43:12.310622   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:12.310670   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.315134   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:12.315185   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:12.359384   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:12.359404   45037 cri.go:89] found id: ""
	I0130 20:43:12.359412   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:12.359468   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.363796   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:12.363856   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:12.399741   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:12.399771   45037 cri.go:89] found id: ""
	I0130 20:43:12.399783   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:12.399844   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.404237   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:12.404302   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:12.457772   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:12.457806   45037 cri.go:89] found id: ""
	I0130 20:43:12.457816   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:12.457876   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.462316   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:12.462378   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:12.499660   45037 cri.go:89] found id: ""
	I0130 20:43:12.499690   45037 logs.go:276] 0 containers: []
	W0130 20:43:12.499699   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:12.499707   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:12.499763   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:12.548931   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:12.548961   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:12.548969   45037 cri.go:89] found id: ""
	I0130 20:43:12.548978   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:12.549037   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.552983   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.557322   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:12.557340   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:12.599784   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:12.599812   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:12.716124   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:12.716156   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:12.766940   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:12.766980   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:12.804026   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:12.804059   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:13.165109   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:13.165153   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:13.204652   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:13.204679   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:13.242644   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:13.242675   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:13.282527   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:13.282558   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:13.335128   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:13.335156   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:13.385564   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:13.385599   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:13.449564   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:13.449603   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:13.464376   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:13.464406   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:09.825882   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:11.827628   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.325309   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:12.012894   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.014496   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:12.765167   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.765356   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:16.017083   45037 system_pods.go:59] 8 kube-system pods found
	I0130 20:43:16.017121   45037 system_pods.go:61] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running
	I0130 20:43:16.017128   45037 system_pods.go:61] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running
	I0130 20:43:16.017135   45037 system_pods.go:61] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running
	I0130 20:43:16.017141   45037 system_pods.go:61] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running
	I0130 20:43:16.017148   45037 system_pods.go:61] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running
	I0130 20:43:16.017154   45037 system_pods.go:61] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running
	I0130 20:43:16.017165   45037 system_pods.go:61] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:43:16.017172   45037 system_pods.go:61] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running
	I0130 20:43:16.017185   45037 system_pods.go:74] duration metric: took 3.853859786s to wait for pod list to return data ...
	I0130 20:43:16.017198   45037 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:43:16.019949   45037 default_sa.go:45] found service account: "default"
	I0130 20:43:16.019967   45037 default_sa.go:55] duration metric: took 2.760881ms for default service account to be created ...
	I0130 20:43:16.019976   45037 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:43:16.025198   45037 system_pods.go:86] 8 kube-system pods found
	I0130 20:43:16.025219   45037 system_pods.go:89] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running
	I0130 20:43:16.025225   45037 system_pods.go:89] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running
	I0130 20:43:16.025229   45037 system_pods.go:89] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running
	I0130 20:43:16.025234   45037 system_pods.go:89] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running
	I0130 20:43:16.025238   45037 system_pods.go:89] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running
	I0130 20:43:16.025242   45037 system_pods.go:89] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running
	I0130 20:43:16.025248   45037 system_pods.go:89] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:43:16.025258   45037 system_pods.go:89] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running
	I0130 20:43:16.025264   45037 system_pods.go:126] duration metric: took 5.282813ms to wait for k8s-apps to be running ...
	I0130 20:43:16.025270   45037 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:43:16.025309   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:43:16.043415   45037 system_svc.go:56] duration metric: took 18.134458ms WaitForService to wait for kubelet.
	I0130 20:43:16.043443   45037 kubeadm.go:581] duration metric: took 4m24.119724167s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:43:16.043472   45037 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:43:16.046999   45037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:43:16.047021   45037 node_conditions.go:123] node cpu capacity is 2
	I0130 20:43:16.047035   45037 node_conditions.go:105] duration metric: took 3.556321ms to run NodePressure ...
	I0130 20:43:16.047048   45037 start.go:228] waiting for startup goroutines ...
	I0130 20:43:16.047061   45037 start.go:233] waiting for cluster config update ...
	I0130 20:43:16.047078   45037 start.go:242] writing updated cluster config ...
	I0130 20:43:16.047368   45037 ssh_runner.go:195] Run: rm -f paused
	I0130 20:43:16.098760   45037 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:43:16.100739   45037 out.go:177] * Done! kubectl is now configured to use "embed-certs-208583" cluster and "default" namespace by default
	I0130 20:43:16.326450   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:18.824456   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:16.514335   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:19.014528   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:17.264059   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:19.264543   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:20.824649   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.324731   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:21.014634   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.513609   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:21.763771   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.764216   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:25.325575   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:27.825708   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:25.514335   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:27.506991   45441 pod_ready.go:81] duration metric: took 4m0.000368672s waiting for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" ...
	E0130 20:43:27.507020   45441 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 20:43:27.507037   45441 pod_ready.go:38] duration metric: took 4m11.059827725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:43:27.507060   45441 kubeadm.go:640] restartCluster took 4m33.680532974s
	W0130 20:43:27.507128   45441 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 20:43:27.507159   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 20:43:26.264077   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:28.264502   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:30.764952   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:30.325157   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:32.325570   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:32.766530   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:35.264541   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:34.825545   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:36.825757   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:38.825922   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:37.764613   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:39.772391   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:41.253066   45441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.745883202s)
	I0130 20:43:41.253138   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:43:41.267139   45441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:43:41.276814   45441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:43:41.286633   45441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:43:41.286678   45441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 20:43:41.340190   45441 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 20:43:41.340255   45441 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 20:43:41.491373   45441 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 20:43:41.491524   45441 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 20:43:41.491644   45441 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 20:43:41.735659   45441 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:43:41.737663   45441 out.go:204]   - Generating certificates and keys ...
	I0130 20:43:41.737778   45441 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 20:43:41.737875   45441 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 20:43:41.737961   45441 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 20:43:41.738034   45441 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 20:43:41.738116   45441 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 20:43:41.738215   45441 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 20:43:41.738295   45441 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 20:43:41.738381   45441 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 20:43:41.738481   45441 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 20:43:41.738542   45441 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 20:43:41.738578   45441 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 20:43:41.738633   45441 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:43:41.894828   45441 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:43:42.122408   45441 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:43:42.406185   45441 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:43:42.526794   45441 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:43:42.527473   45441 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:43:42.529906   45441 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:43:40.826403   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:43.324650   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:42.531956   45441 out.go:204]   - Booting up control plane ...
	I0130 20:43:42.532077   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:43:42.532175   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:43:42.532276   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:43:42.550440   45441 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:43:42.551432   45441 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:43:42.551515   45441 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 20:43:42.666449   45441 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 20:43:42.265430   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:44.268768   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:45.325121   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:47.325585   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:46.768728   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:49.264313   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:50.670814   45441 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004172 seconds
	I0130 20:43:50.670940   45441 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 20:43:50.693878   45441 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 20:43:51.228257   45441 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 20:43:51.228498   45441 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-877742 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 20:43:51.743336   45441 kubeadm.go:322] [bootstrap-token] Using token: hhyk9t.fiwckj4dbaljm18s
	I0130 20:43:51.744898   45441 out.go:204]   - Configuring RBAC rules ...
	I0130 20:43:51.744996   45441 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 20:43:51.755911   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 20:43:51.769124   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 20:43:51.773192   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 20:43:51.776643   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 20:43:51.780261   45441 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 20:43:51.807541   45441 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 20:43:52.070376   45441 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 20:43:52.167958   45441 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 20:43:52.167994   45441 kubeadm.go:322] 
	I0130 20:43:52.168050   45441 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 20:43:52.168061   45441 kubeadm.go:322] 
	I0130 20:43:52.168142   45441 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 20:43:52.168157   45441 kubeadm.go:322] 
	I0130 20:43:52.168193   45441 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 20:43:52.168254   45441 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 20:43:52.168325   45441 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 20:43:52.168336   45441 kubeadm.go:322] 
	I0130 20:43:52.168399   45441 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 20:43:52.168409   45441 kubeadm.go:322] 
	I0130 20:43:52.168469   45441 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 20:43:52.168480   45441 kubeadm.go:322] 
	I0130 20:43:52.168546   45441 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 20:43:52.168639   45441 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 20:43:52.168731   45441 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 20:43:52.168741   45441 kubeadm.go:322] 
	I0130 20:43:52.168834   45441 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 20:43:52.168928   45441 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 20:43:52.168938   45441 kubeadm.go:322] 
	I0130 20:43:52.169033   45441 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token hhyk9t.fiwckj4dbaljm18s \
	I0130 20:43:52.169145   45441 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 20:43:52.169175   45441 kubeadm.go:322] 	--control-plane 
	I0130 20:43:52.169185   45441 kubeadm.go:322] 
	I0130 20:43:52.169274   45441 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 20:43:52.169283   45441 kubeadm.go:322] 
	I0130 20:43:52.169374   45441 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token hhyk9t.fiwckj4dbaljm18s \
	I0130 20:43:52.169485   45441 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 20:43:52.170103   45441 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 20:43:52.170128   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:43:52.170138   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:43:52.171736   45441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:43:49.827004   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:51.828301   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:54.324951   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:52.173096   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:43:52.207763   45441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:43:52.239391   45441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:43:52.239528   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:52.239550   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=default-k8s-diff-port-877742 minikube.k8s.io/updated_at=2024_01_30T20_43_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:52.359837   45441 ops.go:34] apiserver oom_adj: -16
	I0130 20:43:52.622616   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:53.123165   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:53.622655   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:54.122819   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:54.623579   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:55.122784   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:51.265017   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:53.765449   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:56.826059   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:59.324992   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:55.622980   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.123436   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.623691   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:57.122685   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:57.623150   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:58.123358   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:58.623234   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:59.122804   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:59.623408   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:00.122730   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.264593   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:58.764827   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:00.765740   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:01.325185   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:03.325582   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:00.622649   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:01.123007   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:01.623488   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:02.123117   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:02.623186   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:03.122987   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:03.623625   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:04.123576   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:04.623493   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:05.123073   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:05.292330   45441 kubeadm.go:1088] duration metric: took 13.052870929s to wait for elevateKubeSystemPrivileges.
	I0130 20:44:05.292359   45441 kubeadm.go:406] StartCluster complete in 5m11.519002976s
	I0130 20:44:05.292376   45441 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:05.292446   45441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:44:05.294511   45441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:05.296490   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:44:05.296705   45441 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:44:05.296739   45441 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:44:05.296797   45441 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.296814   45441 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.296823   45441 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:44:05.296872   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.297028   45441 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.297068   45441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-877742"
	I0130 20:44:05.297257   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297282   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.297449   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297476   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.297476   45441 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.297498   45441 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.297512   45441 addons.go:243] addon metrics-server should already be in state true
	I0130 20:44:05.297557   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.297934   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297972   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.314618   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0130 20:44:05.314913   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I0130 20:44:05.315139   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.315638   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.315718   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.315751   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.316139   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.316295   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.316318   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.316342   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0130 20:44:05.316649   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.316695   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.316729   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.316842   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.317131   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.317573   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.317598   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.317967   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.318507   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.318539   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.321078   45441 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.321104   45441 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:44:05.321129   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.321503   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.321530   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.338144   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0130 20:44:05.338180   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0130 20:44:05.338717   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.338798   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.339318   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.339325   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.339343   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.339345   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.339804   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.339819   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.339987   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.340017   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.340889   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33925
	I0130 20:44:05.341348   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.341847   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.341870   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.342243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.342328   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.344137   45441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:44:05.342641   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.344745   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.345833   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:44:05.345871   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:44:05.345889   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.345936   45441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:44:05.347567   45441 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:05.347585   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:44:05.347602   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.346048   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.348959   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.349635   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.349686   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.349853   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.350119   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.350404   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.350619   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:05.351435   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.351548   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.351565   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.351753   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.351924   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.352094   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.352237   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:05.366786   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I0130 20:44:05.367211   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.367744   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.367768   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.368174   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.368435   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.370411   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.370688   45441 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:05.370707   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:44:05.370726   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.375681   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.375726   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.375758   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.375778   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.375938   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.376136   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.376324   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:03.263112   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:05.264610   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:05.536173   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:44:05.547763   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:44:05.547783   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:44:05.561439   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:05.589801   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:05.619036   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:44:05.619063   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:44:05.672972   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:05.672993   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:44:05.753214   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:05.861799   45441 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-877742" context rescaled to 1 replicas
	I0130 20:44:05.861852   45441 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.52 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:44:05.863602   45441 out.go:177] * Verifying Kubernetes components...
	I0130 20:44:05.864716   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:07.418910   45441 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.882691784s)
	I0130 20:44:07.418945   45441 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0130 20:44:07.960063   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.370223433s)
	I0130 20:44:07.960120   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960161   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.960158   45441 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.095417539s)
	I0130 20:44:07.960143   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.206889959s)
	I0130 20:44:07.960223   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398756648s)
	I0130 20:44:07.960234   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960247   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.960190   45441 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-877742" to be "Ready" ...
	I0130 20:44:07.960251   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961892   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961892   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961902   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961919   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961921   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961902   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961934   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961936   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961941   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961944   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961950   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961955   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961970   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961980   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961990   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.962309   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962340   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962348   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962350   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.962357   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.962380   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962380   45441 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-877742"
	I0130 20:44:07.962420   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962439   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.979672   45441 node_ready.go:49] node "default-k8s-diff-port-877742" has status "Ready":"True"
	I0130 20:44:07.979700   45441 node_ready.go:38] duration metric: took 19.437813ms waiting for node "default-k8s-diff-port-877742" to be "Ready" ...
	I0130 20:44:07.979713   45441 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:08.005989   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:08.006020   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:08.006266   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:08.006287   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:08.006286   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:08.008091   45441 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0130 20:44:05.329467   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:07.826212   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:08.009918   45441 addons.go:505] enable addons completed in 2.713172208s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0130 20:44:08.032478   45441 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.539497   45441 pod_ready.go:92] pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.539527   45441 pod_ready.go:81] duration metric: took 1.50701275s waiting for pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.539537   45441 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.545068   45441 pod_ready.go:92] pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.545090   45441 pod_ready.go:81] duration metric: took 5.546681ms waiting for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.545099   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.550794   45441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.550817   45441 pod_ready.go:81] duration metric: took 5.711144ms waiting for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.550829   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.556050   45441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.556068   45441 pod_ready.go:81] duration metric: took 5.232882ms waiting for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.556076   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-59zvd" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.562849   45441 pod_ready.go:92] pod "kube-proxy-59zvd" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.562866   45441 pod_ready.go:81] duration metric: took 6.784197ms waiting for pod "kube-proxy-59zvd" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.562874   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.965815   45441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.965846   45441 pod_ready.go:81] duration metric: took 402.96387ms waiting for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.965860   45441 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:07.265985   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:09.765494   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:10.326063   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:12.825921   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:11.974724   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:14.473879   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:12.265674   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:14.765546   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:15.325945   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:17.326041   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:16.974143   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:19.473552   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:16.765691   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:18.766995   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:19.824366   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:21.824919   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:24.318779   45819 pod_ready.go:81] duration metric: took 4m0.000598437s waiting for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" ...
	E0130 20:44:24.318808   45819 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 20:44:24.318829   45819 pod_ready.go:38] duration metric: took 4m1.194970045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:24.318872   45819 kubeadm.go:640] restartCluster took 5m9.285235807s
	W0130 20:44:24.318943   45819 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 20:44:24.318974   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 20:44:21.973193   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.974160   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:21.263429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.263586   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.263609   44923 pod_ready.go:81] duration metric: took 4m0.006890289s waiting for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	E0130 20:44:23.263618   44923 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:44:23.263625   44923 pod_ready.go:38] duration metric: took 4m4.564565945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:23.263637   44923 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:44:23.263671   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:23.263711   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:23.319983   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:23.320013   44923 cri.go:89] found id: ""
	I0130 20:44:23.320023   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:23.320078   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.325174   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:23.325239   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:23.375914   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:23.375952   44923 cri.go:89] found id: ""
	I0130 20:44:23.375960   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:23.376003   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.380265   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:23.380324   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:23.428507   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:23.428534   44923 cri.go:89] found id: ""
	I0130 20:44:23.428544   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:23.428591   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.434113   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:23.434184   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:23.522888   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:23.522915   44923 cri.go:89] found id: ""
	I0130 20:44:23.522922   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:23.522964   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.534952   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:23.535015   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:23.576102   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:23.576129   44923 cri.go:89] found id: ""
	I0130 20:44:23.576138   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:23.576185   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.580463   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:23.580527   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:23.620990   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:23.621011   44923 cri.go:89] found id: ""
	I0130 20:44:23.621018   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:23.621069   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.625706   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:23.625762   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:23.666341   44923 cri.go:89] found id: ""
	I0130 20:44:23.666368   44923 logs.go:276] 0 containers: []
	W0130 20:44:23.666378   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:23.666384   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:23.666441   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:23.707229   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:23.707248   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:23.707252   44923 cri.go:89] found id: ""
	I0130 20:44:23.707258   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:23.707314   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.711242   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.715859   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:23.715883   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:23.775696   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:23.775722   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:23.817767   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:23.817796   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:24.301934   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:24.301969   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:24.361236   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:24.361265   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:24.511849   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:24.511886   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:24.573648   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:24.573683   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:24.620572   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:24.620608   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:24.687312   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:24.687346   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:24.702224   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:24.702262   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:24.749188   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:24.749218   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:24.793069   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:24.793093   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:24.829705   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:24.829730   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
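	The pass above is minikube's diagnostic sweep: it resolves crictl, then tails the last 400 log lines of each control-plane container it found. A minimal sketch of that loop, assuming crictl is on the PATH; the container IDs below are placeholders, not the IDs from this run:

	    # Sketch only: tail the last 400 log lines of each discovered container.
	    # In the run above the IDs come from `sudo crictl ps -a --quiet --name=<component>`.
	    CONTAINER_IDS=("<kube-apiserver-id>" "<etcd-id>" "<coredns-id>")
	    CRICTL="$(which crictl || echo crictl)"
	    for id in "${CONTAINER_IDS[@]}"; do
	        echo "=== logs for ${id} ==="
	        sudo "${CRICTL}" logs --tail 400 "${id}"
	    done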
	I0130 20:44:29.263901   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.944900372s)
	I0130 20:44:29.263978   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:29.277198   45819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:44:29.286661   45819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:44:29.297088   45819 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:44:29.297129   45819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0130 20:44:29.360347   45819 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0130 20:44:29.360446   45819 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 20:44:29.516880   45819 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 20:44:29.517075   45819 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 20:44:29.517217   45819 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 20:44:29.756175   45819 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:44:29.756323   45819 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:44:29.764820   45819 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0130 20:44:29.907654   45819 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:44:26.473595   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:28.473808   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:29.909307   45819 out.go:204]   - Generating certificates and keys ...
	I0130 20:44:29.909397   45819 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 20:44:29.909484   45819 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 20:44:29.909578   45819 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 20:44:29.909674   45819 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 20:44:29.909784   45819 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 20:44:29.909866   45819 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 20:44:29.909974   45819 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 20:44:29.910057   45819 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 20:44:29.910163   45819 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 20:44:29.910266   45819 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 20:44:29.910316   45819 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 20:44:29.910409   45819 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:44:29.974805   45819 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:44:30.281258   45819 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:44:30.605015   45819 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:44:30.782125   45819 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:44:30.783329   45819 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:44:27.369691   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:44:27.393279   44923 api_server.go:72] duration metric: took 4m16.430750077s to wait for apiserver process to appear ...
	I0130 20:44:27.393306   44923 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:44:27.393355   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:27.393434   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:27.443366   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:27.443390   44923 cri.go:89] found id: ""
	I0130 20:44:27.443400   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:27.443457   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.448963   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:27.449021   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:27.502318   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:27.502341   44923 cri.go:89] found id: ""
	I0130 20:44:27.502348   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:27.502398   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.507295   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:27.507352   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:27.548224   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:27.548247   44923 cri.go:89] found id: ""
	I0130 20:44:27.548255   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:27.548299   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.552806   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:27.552864   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:27.608403   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:27.608434   44923 cri.go:89] found id: ""
	I0130 20:44:27.608444   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:27.608523   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.613370   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:27.613435   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:27.668380   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:27.668406   44923 cri.go:89] found id: ""
	I0130 20:44:27.668417   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:27.668470   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.673171   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:27.673231   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:27.720444   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:27.720473   44923 cri.go:89] found id: ""
	I0130 20:44:27.720483   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:27.720546   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.725007   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:27.725062   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:27.772186   44923 cri.go:89] found id: ""
	I0130 20:44:27.772214   44923 logs.go:276] 0 containers: []
	W0130 20:44:27.772224   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:27.772231   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:27.772288   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:27.813222   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:27.813259   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:27.813268   44923 cri.go:89] found id: ""
	I0130 20:44:27.813286   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:27.813347   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.817565   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.821737   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:27.821759   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:28.299900   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:28.299933   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:28.441830   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:28.441866   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:28.485579   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:28.485611   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:28.500668   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:28.500691   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:28.558472   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:28.558502   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:28.604655   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:28.604687   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:28.670010   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:28.670041   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:28.712222   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:28.712259   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:28.764243   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:28.764276   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:28.801930   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:28.801956   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:28.848585   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:28.848612   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:28.902903   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:28.902936   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:30.785050   45819 out.go:204]   - Booting up control plane ...
	I0130 20:44:30.785155   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:44:30.790853   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:44:30.798657   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:44:30.799425   45819 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:44:30.801711   45819 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 20:44:30.475584   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:32.973843   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:34.974144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:31.454103   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:44:31.460009   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0130 20:44:31.461505   44923 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 20:44:31.461527   44923 api_server.go:131] duration metric: took 4.068214052s to wait for apiserver health ...
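	The healthz wait above is a plain HTTPS GET against the apiserver endpoint shown in the log. A rough manual equivalent, assuming the cluster's default behaviour of allowing unauthenticated access to /healthz (the -k flag skips certificate verification and is only for a quick check):

	    # Probe the same endpoint the test polls; a healthy apiserver answers "ok".
	    curl -sk https://192.168.50.220:8443/healthz && echo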
	I0130 20:44:31.461537   44923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:44:31.461563   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:31.461626   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:31.509850   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:31.509874   44923 cri.go:89] found id: ""
	I0130 20:44:31.509884   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:31.509941   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.514078   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:31.514136   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:31.555581   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:31.555605   44923 cri.go:89] found id: ""
	I0130 20:44:31.555613   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:31.555674   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.559888   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:31.559948   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:31.620256   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:31.620285   44923 cri.go:89] found id: ""
	I0130 20:44:31.620295   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:31.620352   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.626003   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:31.626064   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:31.662862   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:31.662889   44923 cri.go:89] found id: ""
	I0130 20:44:31.662899   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:31.662972   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.668242   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:31.668306   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:31.717065   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:31.717089   44923 cri.go:89] found id: ""
	I0130 20:44:31.717098   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:31.717160   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.722195   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:31.722250   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:31.779789   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:31.779812   44923 cri.go:89] found id: ""
	I0130 20:44:31.779821   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:31.779894   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.784710   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:31.784776   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:31.826045   44923 cri.go:89] found id: ""
	I0130 20:44:31.826073   44923 logs.go:276] 0 containers: []
	W0130 20:44:31.826082   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:31.826087   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:31.826131   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:31.868212   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:31.868236   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:31.868243   44923 cri.go:89] found id: ""
	I0130 20:44:31.868253   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:31.868314   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.873019   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.877432   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:31.877456   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:31.915888   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:31.915915   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:31.972950   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:31.972978   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:32.028993   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:32.029028   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:32.046602   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:32.046633   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:32.094088   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:32.094123   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:32.138616   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:32.138645   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:32.526995   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:32.527033   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:32.591970   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:32.592003   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:32.655438   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:32.655466   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:32.707131   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:32.707163   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:32.749581   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:32.749610   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:32.815778   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:32.815805   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:35.448121   44923 system_pods.go:59] 8 kube-system pods found
	I0130 20:44:35.448155   44923 system_pods.go:61] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running
	I0130 20:44:35.448162   44923 system_pods.go:61] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running
	I0130 20:44:35.448169   44923 system_pods.go:61] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running
	I0130 20:44:35.448175   44923 system_pods.go:61] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running
	I0130 20:44:35.448181   44923 system_pods.go:61] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running
	I0130 20:44:35.448188   44923 system_pods.go:61] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running
	I0130 20:44:35.448198   44923 system_pods.go:61] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:44:35.448210   44923 system_pods.go:61] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running
	I0130 20:44:35.448221   44923 system_pods.go:74] duration metric: took 3.986678023s to wait for pod list to return data ...
	I0130 20:44:35.448227   44923 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:44:35.451377   44923 default_sa.go:45] found service account: "default"
	I0130 20:44:35.451397   44923 default_sa.go:55] duration metric: took 3.162882ms for default service account to be created ...
	I0130 20:44:35.451404   44923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:44:35.457941   44923 system_pods.go:86] 8 kube-system pods found
	I0130 20:44:35.457962   44923 system_pods.go:89] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running
	I0130 20:44:35.457969   44923 system_pods.go:89] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running
	I0130 20:44:35.457976   44923 system_pods.go:89] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running
	I0130 20:44:35.457983   44923 system_pods.go:89] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running
	I0130 20:44:35.457992   44923 system_pods.go:89] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running
	I0130 20:44:35.457999   44923 system_pods.go:89] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running
	I0130 20:44:35.458013   44923 system_pods.go:89] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:44:35.458023   44923 system_pods.go:89] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running
	I0130 20:44:35.458032   44923 system_pods.go:126] duration metric: took 6.622973ms to wait for k8s-apps to be running ...
	I0130 20:44:35.458040   44923 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:44:35.458085   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:35.478158   44923 system_svc.go:56] duration metric: took 20.107762ms WaitForService to wait for kubelet.
	I0130 20:44:35.478182   44923 kubeadm.go:581] duration metric: took 4m24.515659177s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:44:35.478205   44923 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:44:35.481624   44923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:44:35.481649   44923 node_conditions.go:123] node cpu capacity is 2
	I0130 20:44:35.481661   44923 node_conditions.go:105] duration metric: took 3.450762ms to run NodePressure ...
	I0130 20:44:35.481674   44923 start.go:228] waiting for startup goroutines ...
	I0130 20:44:35.481682   44923 start.go:233] waiting for cluster config update ...
	I0130 20:44:35.481695   44923 start.go:242] writing updated cluster config ...
	I0130 20:44:35.481966   44923 ssh_runner.go:195] Run: rm -f paused
	I0130 20:44:35.534192   44923 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0130 20:44:35.537286   44923 out.go:177] * Done! kubectl is now configured to use "no-preload-473743" cluster and "default" namespace by default
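	After the "Done!" line the no-preload-473743 profile is the active kubeconfig context. A generic way to confirm that from the host (not part of this test run):

	    # Confirm which context minikube selected and that the node is reachable.
	    kubectl config current-context      # expected: no-preload-473743
	    kubectl get nodes -o wide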
	I0130 20:44:36.975176   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:39.472594   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:40.808532   45819 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005048 seconds
	I0130 20:44:40.808703   45819 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 20:44:40.821445   45819 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 20:44:41.350196   45819 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 20:44:41.350372   45819 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-150971 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0130 20:44:41.859169   45819 kubeadm.go:322] [bootstrap-token] Using token: vlkrdr.8ubylscclgt88ll2
	I0130 20:44:41.862311   45819 out.go:204]   - Configuring RBAC rules ...
	I0130 20:44:41.862450   45819 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 20:44:41.870072   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 20:44:41.874429   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 20:44:41.883936   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 20:44:41.887738   45819 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 20:44:41.963361   45819 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 20:44:42.299030   45819 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 20:44:42.300623   45819 kubeadm.go:322] 
	I0130 20:44:42.300708   45819 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 20:44:42.300721   45819 kubeadm.go:322] 
	I0130 20:44:42.300820   45819 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 20:44:42.300845   45819 kubeadm.go:322] 
	I0130 20:44:42.300886   45819 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 20:44:42.300975   45819 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 20:44:42.301048   45819 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 20:44:42.301061   45819 kubeadm.go:322] 
	I0130 20:44:42.301126   45819 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 20:44:42.301241   45819 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 20:44:42.301309   45819 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 20:44:42.301326   45819 kubeadm.go:322] 
	I0130 20:44:42.301417   45819 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0130 20:44:42.301482   45819 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 20:44:42.301488   45819 kubeadm.go:322] 
	I0130 20:44:42.301554   45819 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vlkrdr.8ubylscclgt88ll2 \
	I0130 20:44:42.301684   45819 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 20:44:42.301717   45819 kubeadm.go:322]     --control-plane 	  
	I0130 20:44:42.301726   45819 kubeadm.go:322] 
	I0130 20:44:42.301827   45819 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 20:44:42.301844   45819 kubeadm.go:322] 
	I0130 20:44:42.301984   45819 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vlkrdr.8ubylscclgt88ll2 \
	I0130 20:44:42.302116   45819 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 20:44:42.302689   45819 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
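	The preflight warning above only means the kubelet unit is not enabled for boot; the remedy kubeadm suggests is the single systemctl call it names (shown here as-is, not something this test performs):

	    # Enable kubelet at boot, per the kubeadm warning above.
	    sudo systemctl enable kubelet.service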
	I0130 20:44:42.302726   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:44:42.302739   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:44:42.305197   45819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:44:42.306389   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:44:42.357619   45819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
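	The 457-byte file copied above is minikube's bridge CNI configuration; its exact contents are not printed in the log. The snippet below is only a generic example of a bridge conflist of this kind, with a placeholder pod subnet, to illustrate what such a file typically contains:

	    # Illustrative only: a typical bridge CNI conflist, not the actual file from this run.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF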
	I0130 20:44:42.381081   45819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:44:42.381189   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:42.381196   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=old-k8s-version-150971 minikube.k8s.io/updated_at=2024_01_30T20_44_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:42.406368   45819 ops.go:34] apiserver oom_adj: -16
	I0130 20:44:42.639356   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:43.139439   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:43.640260   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:44.140080   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:44.639587   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:41.473598   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:43.474059   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:45.140354   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:45.640062   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:46.140282   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:46.639400   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:47.140308   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:47.640045   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:48.139406   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:48.640423   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:49.139702   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:49.640036   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:45.973530   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:47.974364   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:49.974551   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:50.139435   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:50.639471   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:51.140088   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:51.639444   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.139401   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.639731   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:53.140050   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:53.639411   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:54.139942   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:54.640279   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.473624   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:54.474924   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
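	The repeated pod_ready lines from process 45441 are minikube polling the metrics-server pod's Ready condition every couple of seconds. A roughly equivalent manual check; the k8s-app=metrics-server label selector is an assumption about the addon's labels, not something shown in this log:

	    # Show each metrics-server pod and its Ready condition status.
	    kubectl -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'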
	I0130 20:44:55.139610   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:55.639431   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:56.140267   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:56.639444   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:57.140068   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:57.296527   45819 kubeadm.go:1088] duration metric: took 14.915402679s to wait for elevateKubeSystemPrivileges.
	I0130 20:44:57.296567   45819 kubeadm.go:406] StartCluster complete in 5m42.316503122s
	I0130 20:44:57.296588   45819 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:57.296672   45819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:44:57.298762   45819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:57.299005   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:44:57.299123   45819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
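	The toEnable map above corresponds to what the minikube CLI exposes as addons; for this profile only storage-provisioner, default-storageclass and metrics-server are switched on. The same toggles can be driven by hand, for example:

	    # Enable and inspect addons for the profile used in this run.
	    minikube -p old-k8s-version-150971 addons enable metrics-server
	    minikube -p old-k8s-version-150971 addons list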
	I0130 20:44:57.299208   45819 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299220   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:44:57.299229   45819 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-150971"
	W0130 20:44:57.299241   45819 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:44:57.299220   45819 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299300   45819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-150971"
	I0130 20:44:57.299315   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.299247   45819 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299387   45819 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-150971"
	W0130 20:44:57.299397   45819 addons.go:243] addon metrics-server should already be in state true
	I0130 20:44:57.299433   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.299705   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299726   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.299756   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299760   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299796   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.299897   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.319159   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0130 20:44:57.319202   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0130 20:44:57.319167   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34823
	I0130 20:44:57.319578   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.319707   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.319771   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.320071   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320103   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320242   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320261   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320408   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320423   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320586   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.320630   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.321140   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.321158   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.321591   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.321624   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.321675   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.321705   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.325091   45819 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-150971"
	W0130 20:44:57.325106   45819 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:44:57.325125   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.325420   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.325442   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.342652   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0130 20:44:57.342787   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41961
	I0130 20:44:57.343203   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.343303   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.343745   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.343779   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.343848   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I0130 20:44:57.343887   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.343903   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.344220   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.344244   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.344220   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.344493   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.344494   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.344707   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.344730   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.345083   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.346139   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.346172   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.346830   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.346891   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.348974   45819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:44:57.350330   45819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:44:57.350364   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:44:57.351707   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:44:57.351729   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.351684   45819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:57.351795   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:44:57.351821   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.356145   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356428   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356595   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.356621   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356767   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.357040   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.357095   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.357123   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.357218   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.357266   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.357458   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.357451   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.357617   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.357754   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.362806   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0130 20:44:57.363167   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.363742   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.363770   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.364074   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.364280   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.365877   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.366086   45819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:57.366096   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:44:57.366107   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.369237   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.369890   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.369930   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.369968   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.370351   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.370563   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.370712   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
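
The sshutil.go lines above build one SSH client per addon step from the DHCP-lease IP (192.168.39.16), port 22, the per-machine private key, and user "docker". Below is a minimal Go sketch of such a connection using golang.org/x/crypto/ssh; the key path and command are taken from the log, while the host-key handling is simplified for illustration (this is not minikube's actual sshutil code):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Values mirroring the log: guest IP, port 22, per-machine key, user "docker".
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; real code may pin the host key
        }
        client, err := ssh.Dial("tcp", "192.168.39.16:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // One command per session, the way ssh_runner issues each step.
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo systemctl is-active --quiet service kubelet")
        fmt.Printf("kubelet active check: %q err=%v\n", out, err)
    }
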
	I0130 20:44:57.509329   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:57.535146   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:57.536528   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:44:57.559042   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:44:57.559066   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:44:57.643054   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:44:57.643081   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:44:57.773561   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:57.773588   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:44:57.848668   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
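
The Run line above applies all four metrics-server manifests in a single invocation of the bundled kubectl against the in-guest kubeconfig, after each file was staged into /etc/kubernetes/addons/ by the scp-from-memory steps. A hedged sketch of that apply step, shelling out locally for illustration (the real command is executed inside the guest through the SSH runner):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Manifest paths and kubectl binary copied from the log.
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.16.0/kubectl", args...)
        // The apply runs against the in-guest kubeconfig, not the host's.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }
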
	I0130 20:44:57.910205   45819 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-150971" context rescaled to 1 replicas
	I0130 20:44:57.910247   45819 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:44:57.912390   45819 out.go:177] * Verifying Kubernetes components...
	I0130 20:44:57.913764   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:58.721986   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.186811658s)
	I0130 20:44:58.722033   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722045   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722145   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.185575635s)
	I0130 20:44:58.722210   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212845439s)
	I0130 20:44:58.722213   45819 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0130 20:44:58.722254   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722271   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722347   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.722359   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722371   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.722381   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722391   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722537   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.722576   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722593   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.722611   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722621   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722659   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722675   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.724251   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.724291   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.724304   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.798383   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.798410   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.798745   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.798767   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.798816   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125243   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.276531373s)
	I0130 20:44:59.125305   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:59.125322   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:59.125256   45819 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.211465342s)
	I0130 20:44:59.125360   45819 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-150971" to be "Ready" ...
	I0130 20:44:59.125612   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:59.125639   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125650   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:59.125650   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:59.125659   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:59.125902   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:59.125953   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:59.125963   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125972   45819 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-150971"
	I0130 20:44:59.127634   45819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:44:59.129415   45819 addons.go:505] enable addons completed in 1.830294624s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:44:59.141691   45819 node_ready.go:49] node "old-k8s-version-150971" has status "Ready":"True"
	I0130 20:44:59.141715   45819 node_ready.go:38] duration metric: took 16.331635ms waiting for node "old-k8s-version-150971" to be "Ready" ...
	I0130 20:44:59.141725   45819 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:59.146645   45819 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:56.475086   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:58.973370   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:00.161718   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace has status "Ready":"True"
	I0130 20:45:00.161741   45819 pod_ready.go:81] duration metric: took 1.015069343s waiting for pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.161754   45819 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zbdxm" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.668280   45819 pod_ready.go:92] pod "kube-proxy-zbdxm" in "kube-system" namespace has status "Ready":"True"
	I0130 20:45:00.668313   45819 pod_ready.go:81] duration metric: took 506.550797ms waiting for pod "kube-proxy-zbdxm" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.668328   45819 pod_ready.go:38] duration metric: took 1.526591158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:45:00.668343   45819 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:45:00.668398   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:45:00.682119   45819 api_server.go:72] duration metric: took 2.771845703s to wait for apiserver process to appear ...
	I0130 20:45:00.682143   45819 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:45:00.682167   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:45:00.687603   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0130 20:45:00.688287   45819 api_server.go:141] control plane version: v1.16.0
	I0130 20:45:00.688302   45819 api_server.go:131] duration metric: took 6.153997ms to wait for apiserver health ...
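
The healthz wait is a plain HTTPS GET against the apiserver endpoint from the log until it returns 200 with body "ok", after which the control-plane version is read. A minimal Go sketch of the same probe; certificate verification is skipped here purely for brevity, whereas the real check trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
            },
        }
        resp, err := client.Get("https://192.168.39.16:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not healthy yet:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz returned", resp.StatusCode) // 200 with body "ok" when healthy
    }
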
	I0130 20:45:00.688309   45819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:45:00.691917   45819 system_pods.go:59] 4 kube-system pods found
	I0130 20:45:00.691936   45819 system_pods.go:61] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.691942   45819 system_pods.go:61] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.691948   45819 system_pods.go:61] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.691954   45819 system_pods.go:61] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.691962   45819 system_pods.go:74] duration metric: took 3.648521ms to wait for pod list to return data ...
	I0130 20:45:00.691970   45819 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:45:00.694229   45819 default_sa.go:45] found service account: "default"
	I0130 20:45:00.694250   45819 default_sa.go:55] duration metric: took 2.274248ms for default service account to be created ...
	I0130 20:45:00.694258   45819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:45:00.698156   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:00.698179   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.698187   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.698198   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.698210   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.698234   45819 retry.go:31] will retry after 277.03208ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:00.979637   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:00.979660   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.979665   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.979671   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.979677   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.979694   45819 retry.go:31] will retry after 341.469517ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.326631   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:01.326666   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:01.326674   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:01.326683   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:01.326689   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:01.326713   45819 retry.go:31] will retry after 487.104661ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.818702   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:01.818733   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:01.818742   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:01.818752   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:01.818759   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:01.818779   45819 retry.go:31] will retry after 574.423042ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:02.398901   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:02.398940   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:02.398949   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:02.398959   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:02.398966   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:02.398986   45819 retry.go:31] will retry after 741.538469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:03.145137   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:03.145162   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:03.145168   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:03.145174   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:03.145179   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:03.145194   45819 retry.go:31] will retry after 742.915086ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:03.892722   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:03.892748   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:03.892753   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:03.892759   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:03.892764   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:03.892779   45819 retry.go:31] will retry after 786.727719ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.473056   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:03.473346   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:04.685933   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:04.685967   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:04.685976   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:04.685985   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:04.685993   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:04.686016   45819 retry.go:31] will retry after 1.232157955s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:05.923020   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:05.923045   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:05.923050   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:05.923056   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:05.923061   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:05.923076   45819 retry.go:31] will retry after 1.652424416s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:07.580982   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:07.581007   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:07.581013   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:07.581019   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:07.581026   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:07.581042   45819 retry.go:31] will retry after 1.774276151s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:09.360073   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:09.360098   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:09.360103   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:09.360110   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:09.360115   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:09.360133   45819 retry.go:31] will retry after 2.786181653s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:05.975152   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:07.975274   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:12.151191   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:12.151215   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:12.151221   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:12.151227   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:12.151232   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:12.151258   45819 retry.go:31] will retry after 3.456504284s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:10.472793   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:12.474310   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:14.977715   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:15.613679   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:15.613705   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:15.613711   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:15.613718   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:15.613722   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:15.613741   45819 retry.go:31] will retry after 4.434906632s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:17.472993   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:19.473530   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:20.053023   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:20.053050   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:20.053055   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:20.053062   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:20.053066   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:20.053082   45819 retry.go:31] will retry after 3.910644554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:23.969998   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:23.970027   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:23.970035   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:23.970047   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:23.970053   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:23.970075   45819 retry.go:31] will retry after 4.907431581s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:21.473946   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:23.973965   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:28.881886   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:28.881911   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:28.881917   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:28.881924   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:28.881929   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:28.881956   45819 retry.go:31] will retry after 7.594967181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:26.473519   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:28.474676   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:30.972445   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:32.973156   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:34.973590   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:36.482226   45819 system_pods.go:86] 5 kube-system pods found
	I0130 20:45:36.482255   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:36.482261   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:36.482267   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Pending
	I0130 20:45:36.482277   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:36.482284   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:36.482306   45819 retry.go:31] will retry after 8.875079493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:36.974189   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:39.474803   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:41.973709   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:43.974130   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:45.361733   45819 system_pods.go:86] 5 kube-system pods found
	I0130 20:45:45.361760   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:45.361766   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:45.361772   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:45:45.361781   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:45.361789   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:45.361820   45819 retry.go:31] will retry after 9.918306048s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0130 20:45:45.976853   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:48.476619   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:50.974748   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:52.975900   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:55.285765   45819 system_pods.go:86] 6 kube-system pods found
	I0130 20:45:55.285793   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:55.285801   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Pending
	I0130 20:45:55.285807   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:55.285813   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:45:55.285822   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:55.285828   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:55.285849   45819 retry.go:31] will retry after 12.684125727s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0130 20:45:55.473705   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:57.973533   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:59.974108   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:02.473825   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:04.973953   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:07.975898   45819 system_pods.go:86] 8 kube-system pods found
	I0130 20:46:07.975923   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:46:07.975929   45819 system_pods.go:89] "etcd-old-k8s-version-150971" [21884345-e587-4bae-88b9-78e0bdacf954] Running
	I0130 20:46:07.975933   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Running
	I0130 20:46:07.975937   45819 system_pods.go:89] "kube-controller-manager-old-k8s-version-150971" [f0cfbd77-f00e-4d40-a301-f24f6ed937e1] Pending
	I0130 20:46:07.975941   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:46:07.975944   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:46:07.975951   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:46:07.975955   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:46:07.975969   45819 retry.go:31] will retry after 15.59894457s: missing components: kube-controller-manager
	I0130 20:46:07.472712   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:09.474175   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:11.478228   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:13.973190   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:16.473264   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:18.474418   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:23.581862   45819 system_pods.go:86] 8 kube-system pods found
	I0130 20:46:23.581890   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:46:23.581895   45819 system_pods.go:89] "etcd-old-k8s-version-150971" [21884345-e587-4bae-88b9-78e0bdacf954] Running
	I0130 20:46:23.581899   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Running
	I0130 20:46:23.581904   45819 system_pods.go:89] "kube-controller-manager-old-k8s-version-150971" [f0cfbd77-f00e-4d40-a301-f24f6ed937e1] Running
	I0130 20:46:23.581907   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:46:23.581911   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:46:23.581918   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:46:23.581923   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:46:23.581932   45819 system_pods.go:126] duration metric: took 1m22.887668504s to wait for k8s-apps to be running ...
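
The retry.go lines above poll kube-system until the static control-plane pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) are mirrored by the kubelet and report Running, backing off with growing, jittered intervals; here the full set appears after roughly 1m23s. A generic sketch of that polling pattern, with a hypothetical missingComponents check standing in for the real pod listing:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // missingComponents is a stand-in for listing kube-system pods and reporting
    // which control-plane components have not appeared yet.
    func missingComponents() []string {
        return nil // pretend everything is already running
    }

    // waitForComponents retries with a growing, slightly jittered backoff until
    // nothing is missing or the deadline passes, mirroring the retry lines above.
    func waitForComponents(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for {
            missing := missingComponents()
            if len(missing) == 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out; still missing: %v", missing)
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %v: missing components: %v\n", sleep, missing)
            time.Sleep(sleep)
            if backoff < 10*time.Second {
                backoff *= 2
            }
        }
    }

    func main() {
        fmt.Println(waitForComponents(6 * time.Minute))
    }
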
	I0130 20:46:23.581939   45819 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:46:23.581986   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:46:23.604099   45819 system_svc.go:56] duration metric: took 22.14886ms WaitForService to wait for kubelet.
	I0130 20:46:23.604134   45819 kubeadm.go:581] duration metric: took 1m25.693865663s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:46:23.604159   45819 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:46:23.607539   45819 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:46:23.607567   45819 node_conditions.go:123] node cpu capacity is 2
	I0130 20:46:23.607580   45819 node_conditions.go:105] duration metric: took 3.415829ms to run NodePressure ...
	I0130 20:46:23.607594   45819 start.go:228] waiting for startup goroutines ...
	I0130 20:46:23.607602   45819 start.go:233] waiting for cluster config update ...
	I0130 20:46:23.607615   45819 start.go:242] writing updated cluster config ...
	I0130 20:46:23.607933   45819 ssh_runner.go:195] Run: rm -f paused
	I0130 20:46:23.658357   45819 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0130 20:46:23.660375   45819 out.go:177] 
	W0130 20:46:23.661789   45819 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0130 20:46:23.663112   45819 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0130 20:46:23.664623   45819 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-150971" cluster and "default" namespace by default
	I0130 20:46:20.474791   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:22.973143   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:24.974320   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:27.474508   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:29.973471   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:31.973727   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:33.974180   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:36.472928   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:38.474336   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:40.973509   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:42.973942   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:45.473120   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:47.972943   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:49.973756   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:51.973913   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:54.472597   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:56.473076   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:58.974262   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:01.476906   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:03.974275   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:06.474453   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:08.973144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:10.973407   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:12.974842   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:15.473765   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:17.474938   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:19.973849   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:21.974660   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:23.977144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:26.479595   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:28.975572   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:31.473715   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:33.974243   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:36.472321   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:38.473133   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:40.973786   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:43.473691   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:45.476882   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:47.975923   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:50.474045   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:52.474411   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:54.474531   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:56.973542   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:58.974226   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:00.975045   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:03.473440   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:05.473667   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:07.973417   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:09.978199   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:09.978230   45441 pod_ready.go:81] duration metric: took 4m0.012361166s waiting for pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace to be "Ready" ...
	E0130 20:48:09.978243   45441 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:48:09.978253   45441 pod_ready.go:38] duration metric: took 4m1.998529694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
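
Meanwhile the 45441 run exhausts its 4-minute WaitExtra budget for metrics-server, records context deadline exceeded, and moves on to the apiserver checks instead of failing immediately. A small sketch of bounding a wait that way with context.WithTimeout; podIsReady is a hypothetical stand-in for reading the pod's Ready condition:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // podIsReady is a stand-in for querying the pod's Ready condition.
    func podIsReady() bool { return false }

    func waitPodCondition(ctx context.Context) error {
        ticker := time.NewTicker(2 * time.Second)
        defer ticker.Stop()
        for {
            if podIsReady() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // context.DeadlineExceeded once the budget is spent
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        if err := waitPodCondition(ctx); errors.Is(err, context.DeadlineExceeded) {
            fmt.Println("WaitExtra: waitPodCondition:", err) // logged, not fatal
        }
    }
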
	I0130 20:48:09.978276   45441 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:48:09.978323   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:09.978403   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:10.038921   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:10.038949   45441 cri.go:89] found id: ""
	I0130 20:48:10.038958   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:10.039017   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.043851   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:10.043902   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:10.088920   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:10.088945   45441 cri.go:89] found id: ""
	I0130 20:48:10.088952   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:10.089001   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.094186   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:10.094267   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:10.143350   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:10.143380   45441 cri.go:89] found id: ""
	I0130 20:48:10.143390   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:10.143450   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.148357   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:10.148426   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:10.187812   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:10.187848   45441 cri.go:89] found id: ""
	I0130 20:48:10.187858   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:10.187914   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.192049   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:10.192109   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:10.241052   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:10.241079   45441 cri.go:89] found id: ""
	I0130 20:48:10.241088   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:10.241139   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.245711   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:10.245763   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:10.287115   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:10.287139   45441 cri.go:89] found id: ""
	I0130 20:48:10.287148   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:10.287194   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.291627   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:10.291697   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:10.341321   45441 cri.go:89] found id: ""
	I0130 20:48:10.341346   45441 logs.go:276] 0 containers: []
	W0130 20:48:10.341356   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:10.341362   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:10.341420   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:10.385515   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:10.385543   45441 cri.go:89] found id: ""
	I0130 20:48:10.385552   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:10.385601   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.390397   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:10.390433   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:10.832689   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:10.832724   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:10.846560   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:10.846587   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:10.887801   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:10.887826   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:10.942977   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:10.943003   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:10.987642   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:10.987669   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:11.024934   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:11.024964   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:11.076336   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:11.076373   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:11.127315   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:11.127344   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:11.182944   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:11.182974   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:11.276494   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:11.276525   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:11.413186   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:11.413213   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:13.960537   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:48:13.977332   45441 api_server.go:72] duration metric: took 4m8.11544723s to wait for apiserver process to appear ...
	I0130 20:48:13.977362   45441 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:48:13.977400   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:13.977466   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:14.025510   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:14.025534   45441 cri.go:89] found id: ""
	I0130 20:48:14.025542   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:14.025593   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.030025   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:14.030103   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:14.070504   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:14.070524   45441 cri.go:89] found id: ""
	I0130 20:48:14.070531   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:14.070577   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.074858   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:14.074928   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:14.110816   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:14.110844   45441 cri.go:89] found id: ""
	I0130 20:48:14.110853   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:14.110912   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.114997   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:14.115079   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:14.169213   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:14.169240   45441 cri.go:89] found id: ""
	I0130 20:48:14.169249   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:14.169305   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.173541   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:14.173607   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:14.210634   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:14.210657   45441 cri.go:89] found id: ""
	I0130 20:48:14.210664   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:14.210717   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.215015   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:14.215074   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:14.258454   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:14.258477   45441 cri.go:89] found id: ""
	I0130 20:48:14.258484   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:14.258532   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.262486   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:14.262537   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:14.302175   45441 cri.go:89] found id: ""
	I0130 20:48:14.302205   45441 logs.go:276] 0 containers: []
	W0130 20:48:14.302213   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:14.302218   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:14.302262   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:14.339497   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:14.339523   45441 cri.go:89] found id: ""
	I0130 20:48:14.339533   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:14.339589   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.343954   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:14.343983   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:14.391168   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:14.391203   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:14.436713   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:14.436743   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:14.473899   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:14.473934   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:14.533733   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:14.533763   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:14.924087   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:14.924121   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:14.972652   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:14.972684   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:15.074398   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:15.074443   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:15.206993   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:15.207026   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:15.258807   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:15.258841   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:15.299162   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:15.299209   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:15.315611   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:15.315643   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:17.859914   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:48:17.865483   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 200:
	ok
	I0130 20:48:17.866876   45441 api_server.go:141] control plane version: v1.28.4
	I0130 20:48:17.866899   45441 api_server.go:131] duration metric: took 3.889528289s to wait for apiserver health ...
	I0130 20:48:17.866910   45441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:48:17.866937   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:17.866992   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:17.907357   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:17.907386   45441 cri.go:89] found id: ""
	I0130 20:48:17.907396   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:17.907461   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.911558   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:17.911617   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:17.948725   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:17.948747   45441 cri.go:89] found id: ""
	I0130 20:48:17.948757   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:17.948819   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.953304   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:17.953365   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:17.994059   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:17.994091   45441 cri.go:89] found id: ""
	I0130 20:48:17.994101   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:17.994158   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.998347   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:17.998402   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:18.047814   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:18.047842   45441 cri.go:89] found id: ""
	I0130 20:48:18.047853   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:18.047914   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.052864   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:18.052927   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:18.091597   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:18.091617   45441 cri.go:89] found id: ""
	I0130 20:48:18.091625   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:18.091680   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.095921   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:18.096034   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:18.146922   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:18.146942   45441 cri.go:89] found id: ""
	I0130 20:48:18.146952   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:18.147002   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.156610   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:18.156671   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:18.209680   45441 cri.go:89] found id: ""
	I0130 20:48:18.209701   45441 logs.go:276] 0 containers: []
	W0130 20:48:18.209711   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:18.209716   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:18.209761   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:18.253810   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:18.253834   45441 cri.go:89] found id: ""
	I0130 20:48:18.253841   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:18.253883   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.258404   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:18.258433   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:18.305088   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:18.305117   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:18.629911   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:18.629948   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:18.677758   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:18.677787   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:18.779831   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:18.779869   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:18.795995   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:18.796024   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:18.844003   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:18.844034   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:18.884617   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:18.884645   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:18.931556   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:18.931591   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:19.066569   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:19.066606   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:19.115012   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:19.115041   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:19.169107   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:19.169137   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:21.731792   45441 system_pods.go:59] 8 kube-system pods found
	I0130 20:48:21.731816   45441 system_pods.go:61] "coredns-5dd5756b68-tlb8h" [547c1fe4-3ef7-421a-b460-660a05caa2ab] Running
	I0130 20:48:21.731821   45441 system_pods.go:61] "etcd-default-k8s-diff-port-877742" [a8ff44ad-5fec-415b-a574-75bce55acf8e] Running
	I0130 20:48:21.731826   45441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-877742" [b183118a-5376-412c-a991-eaebf0e6a46e] Running
	I0130 20:48:21.731830   45441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-877742" [cd5170b0-7d1c-45fd-9670-376d04e7016b] Running
	I0130 20:48:21.731834   45441 system_pods.go:61] "kube-proxy-59zvd" [ca6ef754-0898-4e1d-9ff2-9f42f456db6c] Running
	I0130 20:48:21.731838   45441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-877742" [5870d68e-b7af-408b-9484-a7e414bbe7f7] Running
	I0130 20:48:21.731845   45441 system_pods.go:61] "metrics-server-57f55c9bc5-xjc2m" [7b9a273b-d328-4ae8-925e-5bb305cfe574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:48:21.731853   45441 system_pods.go:61] "storage-provisioner" [db1a28e4-0c45-496e-a566-32a402b0841d] Running
	I0130 20:48:21.731862   45441 system_pods.go:74] duration metric: took 3.864945598s to wait for pod list to return data ...
	I0130 20:48:21.731878   45441 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:48:21.734586   45441 default_sa.go:45] found service account: "default"
	I0130 20:48:21.734604   45441 default_sa.go:55] duration metric: took 2.721611ms for default service account to be created ...
	I0130 20:48:21.734611   45441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:48:21.740794   45441 system_pods.go:86] 8 kube-system pods found
	I0130 20:48:21.740817   45441 system_pods.go:89] "coredns-5dd5756b68-tlb8h" [547c1fe4-3ef7-421a-b460-660a05caa2ab] Running
	I0130 20:48:21.740822   45441 system_pods.go:89] "etcd-default-k8s-diff-port-877742" [a8ff44ad-5fec-415b-a574-75bce55acf8e] Running
	I0130 20:48:21.740827   45441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-877742" [b183118a-5376-412c-a991-eaebf0e6a46e] Running
	I0130 20:48:21.740831   45441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-877742" [cd5170b0-7d1c-45fd-9670-376d04e7016b] Running
	I0130 20:48:21.740835   45441 system_pods.go:89] "kube-proxy-59zvd" [ca6ef754-0898-4e1d-9ff2-9f42f456db6c] Running
	I0130 20:48:21.740840   45441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-877742" [5870d68e-b7af-408b-9484-a7e414bbe7f7] Running
	I0130 20:48:21.740846   45441 system_pods.go:89] "metrics-server-57f55c9bc5-xjc2m" [7b9a273b-d328-4ae8-925e-5bb305cfe574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:48:21.740853   45441 system_pods.go:89] "storage-provisioner" [db1a28e4-0c45-496e-a566-32a402b0841d] Running
	I0130 20:48:21.740860   45441 system_pods.go:126] duration metric: took 6.244006ms to wait for k8s-apps to be running ...
	I0130 20:48:21.740867   45441 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:48:21.740906   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:48:21.756380   45441 system_svc.go:56] duration metric: took 15.505755ms WaitForService to wait for kubelet.
	I0130 20:48:21.756405   45441 kubeadm.go:581] duration metric: took 4m15.894523943s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:48:21.756429   45441 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:48:21.759579   45441 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:48:21.759605   45441 node_conditions.go:123] node cpu capacity is 2
	I0130 20:48:21.759616   45441 node_conditions.go:105] duration metric: took 3.182491ms to run NodePressure ...
	I0130 20:48:21.759626   45441 start.go:228] waiting for startup goroutines ...
	I0130 20:48:21.759632   45441 start.go:233] waiting for cluster config update ...
	I0130 20:48:21.759642   45441 start.go:242] writing updated cluster config ...
	I0130 20:48:21.759879   45441 ssh_runner.go:195] Run: rm -f paused
	I0130 20:48:21.808471   45441 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:48:21.810628   45441 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-877742" cluster and "default" namespace by default
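
The tail of this run shows minikube's readiness loop: it repeatedly enumerates the control-plane containers with crictl, tails their logs, and polls the apiserver's /healthz endpoint on the forwarded port (https://192.168.72.52:8444/healthz above) until it returns 200. A minimal shell sketch of that probe, assuming the host and port taken from this log and using -k because the apiserver's certificate is signed by the cluster-local CA, is:

#!/usr/bin/env bash
# Poll the apiserver /healthz endpoint until it answers 200 or we give up.
# URL taken from the log above; substitute your own cluster's address.
HEALTHZ_URL="https://192.168.72.52:8444/healthz"

for attempt in $(seq 1 60); do
  # -k skips TLS verification: the serving cert is issued by the in-cluster CA.
  code=$(curl -sk -o /dev/null -w '%{http_code}' "$HEALTHZ_URL")
  if [ "$code" = "200" ]; then
    echo "apiserver healthy after ${attempt} attempt(s)"
    exit 0
  fi
  sleep 2
done

echo "apiserver did not become healthy in time" >&2
exit 1

This mirrors the "Checking apiserver healthz at ..." / "returned 200: ok" exchange recorded in the log.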
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 20:39:19 UTC, ends at Tue 2024-01-30 20:53:37 UTC. --
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.334090913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=15caa244-4fc6-4845-87e8-815a072b3f65 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.333989468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706647239580537453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfced8166a62235ba8bd8f0e9ca8b9e9f0091b8c09a3c98cd911949b909a9c1,PodSandboxId:a5442d98c12249d2769c781d5742a3a9c767e3c98ce51408824252f8aeba62d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647219937616575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76483155-3957-4487-a0a8-7c5511ea5fe4,},Annotations:map[string]string{io.kubernetes.container.hash: e27fea54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c,PodSandboxId:2017da92eac5d4134322e2004f54a0fdd411e91da80cdb0b8389fdb2939bf97f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706647216685352196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d4c7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8701b4d-0616-4c05-9ba0-0157adae2d13,},Annotations:map[string]string{io.kubernetes.container.hash: 6e86a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689,PodSandboxId:14211c17a6df66ba5c4755ea6a1e75792b91164fb9fe5ca98be81375944c9f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706647209233271888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zklzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94d19c-b0
d6-4e78-86e8-e6b5f3608753,},Annotations:map[string]string{io.kubernetes.container.hash: bf8c81b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1706647209212984933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e
-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79,PodSandboxId:c59ff4568d34bc5d105303772e883a0753cf4d6af6f33195eff49cccbcf1bdf7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706647202959064062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a114725bb58f16fe05b4
0766dfd675a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901,PodSandboxId:9f5658af0abd0f8fa497b88b01ec774c36f40a91077d023cdeb081102e38d3c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706647202816601449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c77b1d2fc69e7744c0b3663b58046a,},Annotations:map[string]string{io.kub
ernetes.container.hash: d2a09030,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f,PodSandboxId:29e36a5e06e206529459dcca763c0e35d28f1f59e0ae964c029b4b3e41299293,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706647202673676774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b849b15baa44349c67e242be9c74523,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e,PodSandboxId:aac3d267dd8222b3a9325c82f372f6ca00aa95918efdb03fd4e186bc1a0317ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706647202353223356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0488c96715580f546d9b840aeeef0809,},Annotations:map[string
]string{io.kubernetes.container.hash: 91e80384,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=33aa3167-fa12-426b-9dd8-4893c1fdf5f2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.334590358Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648017334570971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=15caa244-4fc6-4845-87e8-815a072b3f65 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.335565370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=07db129d-8acc-43be-a877-edfc804e7696 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.336008575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=07db129d-8acc-43be-a877-edfc804e7696 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.336753455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706647239580537453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfced8166a62235ba8bd8f0e9ca8b9e9f0091b8c09a3c98cd911949b909a9c1,PodSandboxId:a5442d98c12249d2769c781d5742a3a9c767e3c98ce51408824252f8aeba62d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647219937616575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76483155-3957-4487-a0a8-7c5511ea5fe4,},Annotations:map[string]string{io.kubernetes.container.hash: e27fea54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c,PodSandboxId:2017da92eac5d4134322e2004f54a0fdd411e91da80cdb0b8389fdb2939bf97f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706647216685352196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d4c7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8701b4d-0616-4c05-9ba0-0157adae2d13,},Annotations:map[string]string{io.kubernetes.container.hash: 6e86a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689,PodSandboxId:14211c17a6df66ba5c4755ea6a1e75792b91164fb9fe5ca98be81375944c9f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706647209233271888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zklzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94d19c-b0
d6-4e78-86e8-e6b5f3608753,},Annotations:map[string]string{io.kubernetes.container.hash: bf8c81b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1706647209212984933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e
-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79,PodSandboxId:c59ff4568d34bc5d105303772e883a0753cf4d6af6f33195eff49cccbcf1bdf7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706647202959064062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a114725bb58f16fe05b4
0766dfd675a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901,PodSandboxId:9f5658af0abd0f8fa497b88b01ec774c36f40a91077d023cdeb081102e38d3c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706647202816601449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c77b1d2fc69e7744c0b3663b58046a,},Annotations:map[string]string{io.kub
ernetes.container.hash: d2a09030,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f,PodSandboxId:29e36a5e06e206529459dcca763c0e35d28f1f59e0ae964c029b4b3e41299293,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706647202673676774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b849b15baa44349c67e242be9c74523,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e,PodSandboxId:aac3d267dd8222b3a9325c82f372f6ca00aa95918efdb03fd4e186bc1a0317ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706647202353223356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0488c96715580f546d9b840aeeef0809,},Annotations:map[string
]string{io.kubernetes.container.hash: 91e80384,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=07db129d-8acc-43be-a877-edfc804e7696 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.384413553Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7dbd039d-5ce1-4dc0-ba34-f5fc55f3181d name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.384887793Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a5442d98c12249d2769c781d5742a3a9c767e3c98ce51408824252f8aeba62d3,Metadata:&PodSandboxMetadata{Name:busybox,Uid:76483155-3957-4487-a0a8-7c5511ea5fe4,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647216306325212,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76483155-3957-4487-a0a8-7c5511ea5fe4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T20:40:08.285341260Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2017da92eac5d4134322e2004f54a0fdd411e91da80cdb0b8389fdb2939bf97f,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-d4c7t,Uid:a8701b4d-0616-4c05-9ba0-0157adae2d13,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17066472160188121
03,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-d4c7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8701b4d-0616-4c05-9ba0-0157adae2d13,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T20:40:08.285342526Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:971775f45d0d26b258a09bc53da3792d1d7204845d889170c1d1b4d3ab2fe028,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-wzb2g,Uid:cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647212420178088,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-wzb2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T20:40:08.2
85344814Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14211c17a6df66ba5c4755ea6a1e75792b91164fb9fe5ca98be81375944c9f67,Metadata:&PodSandboxMetadata{Name:kube-proxy-zklzt,Uid:fa94d19c-b0d6-4e78-86e8-e6b5f3608753,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647208641813515,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zklzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94d19c-b0d6-4e78-86e8-e6b5f3608753,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T20:40:08.285334049Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a257b079-cb6e-45fd-b05d-9ad6fa26225e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647208635180050,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2024-01-30T20:40:08.285346035Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aac3d267dd8222b3a9325c82f372f6ca00aa95918efdb03fd4e186bc1a0317ae,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-473743,Uid:0488c96715580f546d9b840aeeef0809,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647201882343321,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0488c96715580f546d9b840aeeef0809,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.220:8443,kubernetes.io/config.hash: 0488c96715580f546d9b840aeeef0809,kubernetes.io/config.seen: 2024-01-30T20:40:01.287226444Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:29e36a5e06e206529459dcca763c0e35d28f1f59e0ae964c029b4b3e41299293,Metadata:&PodSandboxMetadata{N
ame:kube-controller-manager-no-preload-473743,Uid:9b849b15baa44349c67e242be9c74523,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647201863862159,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b849b15baa44349c67e242be9c74523,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9b849b15baa44349c67e242be9c74523,kubernetes.io/config.seen: 2024-01-30T20:40:01.287228064Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c59ff4568d34bc5d105303772e883a0753cf4d6af6f33195eff49cccbcf1bdf7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-473743,Uid:a114725bb58f16fe05b40766dfd675a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647201859702392,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kube-scheduler-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a114725bb58f16fe05b40766dfd675a2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a114725bb58f16fe05b40766dfd675a2,kubernetes.io/config.seen: 2024-01-30T20:40:01.287229245Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9f5658af0abd0f8fa497b88b01ec774c36f40a91077d023cdeb081102e38d3c6,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-473743,Uid:f8c77b1d2fc69e7744c0b3663b58046a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647201855739629,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c77b1d2fc69e7744c0b3663b58046a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.220:2379,kubernetes.io/config.hash: f8c77b1d2fc69e7744c0b3663b58046a,ku
bernetes.io/config.seen: 2024-01-30T20:40:01.287221177Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=7dbd039d-5ce1-4dc0-ba34-f5fc55f3181d name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.385963563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6b427254-152b-437f-8c24-5f711371024b name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.386036122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6b427254-152b-437f-8c24-5f711371024b name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.386279992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706647239580537453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfced8166a62235ba8bd8f0e9ca8b9e9f0091b8c09a3c98cd911949b909a9c1,PodSandboxId:a5442d98c12249d2769c781d5742a3a9c767e3c98ce51408824252f8aeba62d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647219937616575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76483155-3957-4487-a0a8-7c5511ea5fe4,},Annotations:map[string]string{io.kubernetes.container.hash: e27fea54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c,PodSandboxId:2017da92eac5d4134322e2004f54a0fdd411e91da80cdb0b8389fdb2939bf97f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706647216685352196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d4c7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8701b4d-0616-4c05-9ba0-0157adae2d13,},Annotations:map[string]string{io.kubernetes.container.hash: 6e86a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689,PodSandboxId:14211c17a6df66ba5c4755ea6a1e75792b91164fb9fe5ca98be81375944c9f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706647209233271888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zklzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94d19c-b0
d6-4e78-86e8-e6b5f3608753,},Annotations:map[string]string{io.kubernetes.container.hash: bf8c81b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79,PodSandboxId:c59ff4568d34bc5d105303772e883a0753cf4d6af6f33195eff49cccbcf1bdf7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706647202959064062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a114725bb58f16fe05
b40766dfd675a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901,PodSandboxId:9f5658af0abd0f8fa497b88b01ec774c36f40a91077d023cdeb081102e38d3c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706647202816601449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c77b1d2fc69e7744c0b3663b58046a,},Annotations:map[string]string{io.k
ubernetes.container.hash: d2a09030,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f,PodSandboxId:29e36a5e06e206529459dcca763c0e35d28f1f59e0ae964c029b4b3e41299293,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706647202673676774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b849b15baa44349c67e242be9c74523,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e,PodSandboxId:aac3d267dd8222b3a9325c82f372f6ca00aa95918efdb03fd4e186bc1a0317ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706647202353223356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0488c96715580f546d9b840aeeef0809,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 91e80384,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6b427254-152b-437f-8c24-5f711371024b name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.409013969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=230e1933-ce62-4988-afe1-a09307f52635 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.409060683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=230e1933-ce62-4988-afe1-a09307f52635 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.410338957Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=860f95bb-0e11-46c2-aeff-f6065613ddb8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.410701347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648017410686669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=860f95bb-0e11-46c2-aeff-f6065613ddb8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.411330486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=511706b4-a7d7-40bc-9ff3-96436b08eac0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.411373598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=511706b4-a7d7-40bc-9ff3-96436b08eac0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.411551667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706647239580537453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfced8166a62235ba8bd8f0e9ca8b9e9f0091b8c09a3c98cd911949b909a9c1,PodSandboxId:a5442d98c12249d2769c781d5742a3a9c767e3c98ce51408824252f8aeba62d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647219937616575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76483155-3957-4487-a0a8-7c5511ea5fe4,},Annotations:map[string]string{io.kubernetes.container.hash: e27fea54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c,PodSandboxId:2017da92eac5d4134322e2004f54a0fdd411e91da80cdb0b8389fdb2939bf97f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706647216685352196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d4c7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8701b4d-0616-4c05-9ba0-0157adae2d13,},Annotations:map[string]string{io.kubernetes.container.hash: 6e86a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689,PodSandboxId:14211c17a6df66ba5c4755ea6a1e75792b91164fb9fe5ca98be81375944c9f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706647209233271888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zklzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94d19c-b0
d6-4e78-86e8-e6b5f3608753,},Annotations:map[string]string{io.kubernetes.container.hash: bf8c81b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1706647209212984933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e
-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79,PodSandboxId:c59ff4568d34bc5d105303772e883a0753cf4d6af6f33195eff49cccbcf1bdf7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706647202959064062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a114725bb58f16fe05b4
0766dfd675a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901,PodSandboxId:9f5658af0abd0f8fa497b88b01ec774c36f40a91077d023cdeb081102e38d3c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706647202816601449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c77b1d2fc69e7744c0b3663b58046a,},Annotations:map[string]string{io.kub
ernetes.container.hash: d2a09030,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f,PodSandboxId:29e36a5e06e206529459dcca763c0e35d28f1f59e0ae964c029b4b3e41299293,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706647202673676774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b849b15baa44349c67e242be9c74523,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e,PodSandboxId:aac3d267dd8222b3a9325c82f372f6ca00aa95918efdb03fd4e186bc1a0317ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706647202353223356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0488c96715580f546d9b840aeeef0809,},Annotations:map[string
]string{io.kubernetes.container.hash: 91e80384,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=511706b4-a7d7-40bc-9ff3-96436b08eac0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.448659413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=81b69b45-6e5b-40e3-8959-5158f15f2236 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.448710240Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=81b69b45-6e5b-40e3-8959-5158f15f2236 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.450625289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0ae6af46-b517-4566-921e-6c5d1d9e8904 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.451192624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648017451173555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=0ae6af46-b517-4566-921e-6c5d1d9e8904 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.451723025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b6b20323-ead0-4444-b5db-2be267271a9a name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.451826851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b6b20323-ead0-4444-b5db-2be267271a9a name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:53:37 no-preload-473743 crio[718]: time="2024-01-30 20:53:37.452008180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706647239580537453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfced8166a62235ba8bd8f0e9ca8b9e9f0091b8c09a3c98cd911949b909a9c1,PodSandboxId:a5442d98c12249d2769c781d5742a3a9c767e3c98ce51408824252f8aeba62d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647219937616575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76483155-3957-4487-a0a8-7c5511ea5fe4,},Annotations:map[string]string{io.kubernetes.container.hash: e27fea54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c,PodSandboxId:2017da92eac5d4134322e2004f54a0fdd411e91da80cdb0b8389fdb2939bf97f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706647216685352196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d4c7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8701b4d-0616-4c05-9ba0-0157adae2d13,},Annotations:map[string]string{io.kubernetes.container.hash: 6e86a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689,PodSandboxId:14211c17a6df66ba5c4755ea6a1e75792b91164fb9fe5ca98be81375944c9f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706647209233271888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zklzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94d19c-b0
d6-4e78-86e8-e6b5f3608753,},Annotations:map[string]string{io.kubernetes.container.hash: bf8c81b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1706647209212984933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e
-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79,PodSandboxId:c59ff4568d34bc5d105303772e883a0753cf4d6af6f33195eff49cccbcf1bdf7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706647202959064062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a114725bb58f16fe05b4
0766dfd675a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901,PodSandboxId:9f5658af0abd0f8fa497b88b01ec774c36f40a91077d023cdeb081102e38d3c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706647202816601449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c77b1d2fc69e7744c0b3663b58046a,},Annotations:map[string]string{io.kub
ernetes.container.hash: d2a09030,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f,PodSandboxId:29e36a5e06e206529459dcca763c0e35d28f1f59e0ae964c029b4b3e41299293,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706647202673676774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b849b15baa44349c67e242be9c74523,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e,PodSandboxId:aac3d267dd8222b3a9325c82f372f6ca00aa95918efdb03fd4e186bc1a0317ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706647202353223356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0488c96715580f546d9b840aeeef0809,},Annotations:map[string
]string{io.kubernetes.container.hash: 91e80384,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b6b20323-ead0-4444-b5db-2be267271a9a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e690d53fe9ae6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   d4e4e386d23fa       storage-provisioner
	bdfced8166a62       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   a5442d98c1224       busybox
	3d08fb7c4f0e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   2017da92eac5d       coredns-76f75df574-d4c7t
	880f1c6b663c7       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   14211c17a6df6       kube-proxy-zklzt
	748483279e2b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   d4e4e386d23fa       storage-provisioner
	39917caad7f3b       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   c59ff4568d34b       kube-scheduler-no-preload-473743
	b6d8d2bbf972c       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   9f5658af0abd0       etcd-no-preload-473743
	10fb0450f95ed       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   29e36a5e06e20       kube-controller-manager-no-preload-473743
	ac5dbd0849de6       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   aac3d267dd822       kube-apiserver-no-preload-473743
	
	
	==> coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52340 - 62464 "HINFO IN 288902453189497013.5229750491074800888. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014583491s
	
	
	==> describe nodes <==
	Name:               no-preload-473743
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-473743
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=no-preload-473743
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T20_29_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 20:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-473743
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 20:53:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 20:50:51 +0000   Tue, 30 Jan 2024 20:29:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 20:50:51 +0000   Tue, 30 Jan 2024 20:29:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 20:50:51 +0000   Tue, 30 Jan 2024 20:29:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 20:50:51 +0000   Tue, 30 Jan 2024 20:40:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.220
	  Hostname:    no-preload-473743
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a382c357ad5489ab98c79e836d3de29
	  System UUID:                9a382c35-7ad5-489a-b98c-79e836d3de29
	  Boot ID:                    708ff03a-910b-4ccf-ad1e-a0814598f511
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-76f75df574-d4c7t                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-473743                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-473743             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-473743    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-zklzt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-473743             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-57f55c9bc5-wzb2g              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node no-preload-473743 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node no-preload-473743 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node no-preload-473743 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                kubelet          Node no-preload-473743 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-473743 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-473743 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-473743 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-473743 event: Registered Node no-preload-473743 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-473743 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-473743 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-473743 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-473743 event: Registered Node no-preload-473743 in Controller
	
	
	==> dmesg <==
	[Jan30 20:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071657] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779061] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.501980] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158670] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.779353] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.198430] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.116740] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.137142] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.099664] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.216915] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[Jan30 20:40] systemd-fstab-generator[1328]: Ignoring "noauto" for root device
	[ +15.101318] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] <==
	{"level":"info","ts":"2024-01-30T20:40:04.799458Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cc509ba192cc331e","local-member-id":"685a0398c95469a9","added-peer-id":"685a0398c95469a9","added-peer-peer-urls":["https://192.168.50.220:2380"]}
	{"level":"info","ts":"2024-01-30T20:40:04.799592Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cc509ba192cc331e","local-member-id":"685a0398c95469a9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T20:40:04.799661Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T20:40:04.800344Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-30T20:40:04.800686Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"685a0398c95469a9","initial-advertise-peer-urls":["https://192.168.50.220:2380"],"listen-peer-urls":["https://192.168.50.220:2380"],"advertise-client-urls":["https://192.168.50.220:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.220:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-30T20:40:04.800853Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-30T20:40:04.800982Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.220:2380"}
	{"level":"info","ts":"2024-01-30T20:40:04.801007Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.220:2380"}
	{"level":"info","ts":"2024-01-30T20:40:06.484214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-30T20:40:06.484383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-30T20:40:06.484507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 received MsgPreVoteResp from 685a0398c95469a9 at term 2"}
	{"level":"info","ts":"2024-01-30T20:40:06.484614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 became candidate at term 3"}
	{"level":"info","ts":"2024-01-30T20:40:06.484671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 received MsgVoteResp from 685a0398c95469a9 at term 3"}
	{"level":"info","ts":"2024-01-30T20:40:06.484705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 became leader at term 3"}
	{"level":"info","ts":"2024-01-30T20:40:06.48491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 685a0398c95469a9 elected leader 685a0398c95469a9 at term 3"}
	{"level":"info","ts":"2024-01-30T20:40:06.486678Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"685a0398c95469a9","local-member-attributes":"{Name:no-preload-473743 ClientURLs:[https://192.168.50.220:2379]}","request-path":"/0/members/685a0398c95469a9/attributes","cluster-id":"cc509ba192cc331e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-30T20:40:06.486753Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T20:40:06.486889Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T20:40:06.487013Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T20:40:06.487428Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T20:40:06.48909Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.220:2379"}
	{"level":"info","ts":"2024-01-30T20:40:06.489221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-30T20:50:06.523097Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":860}
	{"level":"info","ts":"2024-01-30T20:50:06.526516Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":860,"took":"2.765338ms","hash":1731629767}
	{"level":"info","ts":"2024-01-30T20:50:06.526597Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1731629767,"revision":860,"compact-revision":-1}
	
	
	==> kernel <==
	 20:53:37 up 14 min,  0 users,  load average: 0.05, 0.09, 0.08
	Linux no-preload-473743 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] <==
	I0130 20:48:08.972680       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:50:07.972294       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:50:07.972431       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0130 20:50:08.972894       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:50:08.972990       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:50:08.973017       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:50:08.973077       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:50:08.973141       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:50:08.974317       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:51:08.973457       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:51:08.973593       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:51:08.973628       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:51:08.974606       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:51:08.974706       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:51:08.974740       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:53:08.974068       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:53:08.974149       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:53:08.974158       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:53:08.975212       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:53:08.975321       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:53:08.975363       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] <==
	I0130 20:47:51.422693       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:48:20.943300       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:48:21.430370       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:48:50.948487       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:48:51.438957       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:49:20.953111       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:49:21.453336       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:49:50.961331       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:49:51.462559       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:50:20.968733       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:50:21.470597       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:50:50.975723       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:50:51.479883       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 20:51:15.404159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="261.304µs"
	E0130 20:51:20.981961       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:51:21.488066       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 20:51:27.404484       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="106.117µs"
	E0130 20:51:50.989413       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:51:51.498398       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:52:20.995048       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:52:21.508695       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:52:51.000194       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:52:51.517668       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:53:21.006316       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:53:21.526478       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] <==
	I0130 20:40:09.561642       1 server_others.go:72] "Using iptables proxy"
	I0130 20:40:09.584008       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.220"]
	I0130 20:40:09.713026       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0130 20:40:09.713083       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 20:40:09.713099       1 server_others.go:168] "Using iptables Proxier"
	I0130 20:40:09.717048       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 20:40:09.717241       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0130 20:40:09.717280       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:40:09.722953       1 config.go:315] "Starting node config controller"
	I0130 20:40:09.722993       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 20:40:09.723349       1 config.go:188] "Starting service config controller"
	I0130 20:40:09.723356       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 20:40:09.723374       1 config.go:97] "Starting endpoint slice config controller"
	I0130 20:40:09.723377       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 20:40:09.823620       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0130 20:40:09.823750       1 shared_informer.go:318] Caches are synced for service config
	I0130 20:40:09.823754       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] <==
	I0130 20:40:05.244107       1 serving.go:380] Generated self-signed cert in-memory
	W0130 20:40:07.881976       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0130 20:40:07.882129       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 20:40:07.882162       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0130 20:40:07.882268       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0130 20:40:07.975394       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0130 20:40:07.975534       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:40:07.987553       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0130 20:40:07.987613       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0130 20:40:07.988046       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0130 20:40:07.988122       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0130 20:40:08.088882       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 20:39:19 UTC, ends at Tue 2024-01-30 20:53:38 UTC. --
	Jan 30 20:51:01 no-preload-473743 kubelet[1334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:51:04 no-preload-473743 kubelet[1334]: E0130 20:51:04.396139    1334 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 20:51:04 no-preload-473743 kubelet[1334]: E0130 20:51:04.396228    1334 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 20:51:04 no-preload-473743 kubelet[1334]: E0130 20:51:04.396473    1334 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-b8492,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-wzb2g_kube-system(cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 20:51:04 no-preload-473743 kubelet[1334]: E0130 20:51:04.396599    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:51:15 no-preload-473743 kubelet[1334]: E0130 20:51:15.386124    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:51:27 no-preload-473743 kubelet[1334]: E0130 20:51:27.386684    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:51:40 no-preload-473743 kubelet[1334]: E0130 20:51:40.385492    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:51:51 no-preload-473743 kubelet[1334]: E0130 20:51:51.387481    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:52:01 no-preload-473743 kubelet[1334]: E0130 20:52:01.509677    1334 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:52:01 no-preload-473743 kubelet[1334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:52:01 no-preload-473743 kubelet[1334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:52:01 no-preload-473743 kubelet[1334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:52:02 no-preload-473743 kubelet[1334]: E0130 20:52:02.384902    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:52:16 no-preload-473743 kubelet[1334]: E0130 20:52:16.384859    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:52:29 no-preload-473743 kubelet[1334]: E0130 20:52:29.386849    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:52:42 no-preload-473743 kubelet[1334]: E0130 20:52:42.384821    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:52:53 no-preload-473743 kubelet[1334]: E0130 20:52:53.386833    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:53:01 no-preload-473743 kubelet[1334]: E0130 20:53:01.508056    1334 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:53:01 no-preload-473743 kubelet[1334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:53:01 no-preload-473743 kubelet[1334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:53:01 no-preload-473743 kubelet[1334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:53:04 no-preload-473743 kubelet[1334]: E0130 20:53:04.384941    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:53:17 no-preload-473743 kubelet[1334]: E0130 20:53:17.384536    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:53:31 no-preload-473743 kubelet[1334]: E0130 20:53:31.385416    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
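The two recurring kubelet errors in this window follow from the test setup rather than a node fault. metrics-server pulls from fake.domain because the addon was enabled with a MetricsServer registry override pointing at that nonexistent host (see the Audit table further down), so ImagePullBackOff is the expected result, not a registry outage. The ip6tables canary failure is the kubelet probing an IPv6 nat table for which the guest kernel has no module loaded, as the message's own insmod hint suggests. Two hedged one-liners for confirming both by hand; the jsonpath index assumes the pod spec's single metrics-server container:

	kubectl --context no-preload-473743 -n kube-system get pod metrics-server-57f55c9bc5-wzb2g \
	  -o jsonpath='{.spec.containers[0].image}'
	out/minikube-linux-amd64 -p no-preload-473743 ssh "lsmod | grep ip6table_nat"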
	
	
	==> storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] <==
	I0130 20:40:09.484963       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0130 20:40:39.500627       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] <==
	I0130 20:40:39.698207       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 20:40:39.707654       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 20:40:39.707737       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 20:40:57.112638       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 20:40:57.112859       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-473743_f3fb3cca-9c04-49f9-ad5d-0674c5b889ec!
	I0130 20:40:57.112948       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"726cd493-9a17-4202-977a-c6967814510c", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-473743_f3fb3cca-9c04-49f9-ad5d-0674c5b889ec became leader
	I0130 20:40:57.213930       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-473743_f3fb3cca-9c04-49f9-ad5d-0674c5b889ec!
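The two storage-provisioner blocks are the same pod across a restart: the first container exited fatally while the apiserver VIP (10.96.0.1:443) was still unreachable, and its replacement initialized and won the kube-system/k8s.io-minikube-hostpath lease roughly twenty seconds later. When inspecting a live cluster rather than this dump, the crashed container's output remains retrievable; a sketch, assuming the pod keeps its default minikube name:

	kubectl --context no-preload-473743 -n kube-system logs storage-provisioner --previous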
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-473743 -n no-preload-473743
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-473743 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wzb2g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-473743 describe pod metrics-server-57f55c9bc5-wzb2g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-473743 describe pod metrics-server-57f55c9bc5-wzb2g: exit status 1 (66.366966ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wzb2g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-473743 describe pod metrics-server-57f55c9bc5-wzb2g: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0130 20:46:31.182098   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 20:48:07.771719   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-150971 -n old-k8s-version-150971
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-30 20:55:24.259163843 +0000 UTC m=+5561.336133615
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
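The failure here is purely a timeout: the harness polls for nine minutes for a healthy dashboard pod and hits the context deadline. A rough manual equivalent of that wait (not the harness's actual code) would be the following; note that kubectl wait errors out immediately if nothing matches the selector, which is likely what a live check would have shown here:

	kubectl --context old-k8s-version-150971 -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m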
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-150971 -n old-k8s-version-150971
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-150971 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-150971 logs -n 25: (1.631971464s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:28 UTC | 30 Jan 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:28 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-757744 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | disable-driver-mounts-757744                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:31 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473743             | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208583            | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC | 30 Jan 24 20:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-877742  | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC | 30 Jan 24 20:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC |                     |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473743                  | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208583                 | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-150971        | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-877742       | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC | 30 Jan 24 20:48 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-150971             | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC | 30 Jan 24 20:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
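The Audit table captures the shape of each StartStop leg: enable metrics-server with its image pointed at the fake.domain registry, stop the profile, enable the dashboard addon while it is down, then start it again pinned to the old Kubernetes version. Condensed into plain commands for the old-k8s-version profile, with ordering and flags lifted from the table; joining them into one sequence is a reconstruction, not a script the harness ran:

	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-150971 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	out/minikube-linux-amd64 stop -p old-k8s-version-150971 --alsologtostderr -v=3
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-150971 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	out/minikube-linux-amd64 start -p old-k8s-version-150971 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.16.0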
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 20:36:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 20:36:09.643751   45819 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:36:09.644027   45819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:36:09.644038   45819 out.go:309] Setting ErrFile to fd 2...
	I0130 20:36:09.644045   45819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:36:09.644230   45819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:36:09.644766   45819 out.go:303] Setting JSON to false
	I0130 20:36:09.645668   45819 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4715,"bootTime":1706642255,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 20:36:09.645727   45819 start.go:138] virtualization: kvm guest
	I0130 20:36:09.648102   45819 out.go:177] * [old-k8s-version-150971] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 20:36:09.649772   45819 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 20:36:09.651000   45819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 20:36:09.649826   45819 notify.go:220] Checking for updates...
	I0130 20:36:09.653462   45819 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:36:09.654761   45819 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 20:36:09.655939   45819 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 20:36:09.657140   45819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 20:36:09.658638   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:36:09.659027   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:36:09.659066   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:36:09.672985   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39323
	I0130 20:36:09.673381   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:36:09.673876   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:36:09.673897   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:36:09.674191   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:36:09.674351   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:36:09.676038   45819 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0130 20:36:09.677315   45819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 20:36:09.677582   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:36:09.677630   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:36:09.691259   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0130 20:36:09.691604   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:36:09.692060   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:36:09.692089   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:36:09.692371   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:36:09.692555   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:36:09.726172   45819 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 20:36:09.727421   45819 start.go:298] selected driver: kvm2
	I0130 20:36:09.727433   45819 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:36:09.727546   45819 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 20:36:09.728186   45819 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:36:09.728255   45819 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 20:36:09.742395   45819 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 20:36:09.742715   45819 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 20:36:09.742771   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:36:09.742784   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:36:09.742794   45819 start_flags.go:321] config:
	{Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:36:09.742977   45819 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:36:09.745577   45819 out.go:177] * Starting control plane node old-k8s-version-150971 in cluster old-k8s-version-150971
	I0130 20:36:10.483495   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:09.746820   45819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 20:36:09.746852   45819 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0130 20:36:09.746865   45819 cache.go:56] Caching tarball of preloaded images
	I0130 20:36:09.746951   45819 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 20:36:09.746960   45819 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0130 20:36:09.747061   45819 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/config.json ...
	I0130 20:36:09.747229   45819 start.go:365] acquiring machines lock for old-k8s-version-150971: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:36:13.555547   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:19.635533   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:22.707498   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:28.787473   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:31.859544   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:37.939524   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:41.011456   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:47.091510   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:50.163505   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:56.243497   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:59.315474   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:05.395536   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:08.467514   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:14.547517   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:17.619561   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:23.699509   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:26.771568   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:32.851483   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:35.923502   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:42.003515   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:45.075526   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:51.155512   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:54.227514   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:38:00.307532   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:38:03.311451   45037 start.go:369] acquired machines lock for "embed-certs-208583" in 4m29.471089592s
	I0130 20:38:03.311507   45037 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:03.311514   45037 fix.go:54] fixHost starting: 
	I0130 20:38:03.311893   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:03.311933   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:03.326477   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0130 20:38:03.326949   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:03.327373   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:03.327403   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:03.327758   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:03.327946   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:03.328115   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:03.329604   45037 fix.go:102] recreateIfNeeded on embed-certs-208583: state=Stopped err=<nil>
	I0130 20:38:03.329646   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	W0130 20:38:03.329810   45037 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:03.331493   45037 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208583" ...
	I0130 20:38:03.332735   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Start
	I0130 20:38:03.332862   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring networks are active...
	I0130 20:38:03.333514   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring network default is active
	I0130 20:38:03.333859   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring network mk-embed-certs-208583 is active
	I0130 20:38:03.334154   45037 main.go:141] libmachine: (embed-certs-208583) Getting domain xml...
	I0130 20:38:03.334860   45037 main.go:141] libmachine: (embed-certs-208583) Creating domain...
	I0130 20:38:03.309254   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:03.309293   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:38:03.311318   44923 machine.go:91] provisioned docker machine in 4m37.382925036s
	I0130 20:38:03.311359   44923 fix.go:56] fixHost completed within 4m37.403399512s
	I0130 20:38:03.311364   44923 start.go:83] releasing machines lock for "no-preload-473743", held for 4m37.403435936s
	W0130 20:38:03.311387   44923 start.go:694] error starting host: provision: host is not running
	W0130 20:38:03.311504   44923 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0130 20:38:03.311518   44923 start.go:709] Will try again in 5 seconds ...
	I0130 20:38:04.507963   45037 main.go:141] libmachine: (embed-certs-208583) Waiting to get IP...
	I0130 20:38:04.508755   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.509133   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.509207   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.509115   46132 retry.go:31] will retry after 189.527185ms: waiting for machine to come up
	I0130 20:38:04.700560   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.701193   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.701223   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.701137   46132 retry.go:31] will retry after 239.29825ms: waiting for machine to come up
	I0130 20:38:04.941612   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.942080   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.942116   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.942040   46132 retry.go:31] will retry after 388.672579ms: waiting for machine to come up
	I0130 20:38:05.332617   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:05.333018   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:05.333041   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:05.332968   46132 retry.go:31] will retry after 525.5543ms: waiting for machine to come up
	I0130 20:38:05.859677   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:05.860094   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:05.860126   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:05.860055   46132 retry.go:31] will retry after 595.87535ms: waiting for machine to come up
	I0130 20:38:06.457828   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:06.458220   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:06.458244   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:06.458197   46132 retry.go:31] will retry after 766.148522ms: waiting for machine to come up
	I0130 20:38:07.226151   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:07.226615   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:07.226652   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:07.226558   46132 retry.go:31] will retry after 843.449223ms: waiting for machine to come up
	I0130 20:38:08.070983   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:08.071381   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:08.071407   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:08.071338   46132 retry.go:31] will retry after 1.079839146s: waiting for machine to come up
	I0130 20:38:08.313897   44923 start.go:365] acquiring machines lock for no-preload-473743: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:38:09.152768   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:09.153087   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:09.153113   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:09.153034   46132 retry.go:31] will retry after 1.855245571s: waiting for machine to come up
	I0130 20:38:11.010893   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:11.011260   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:11.011299   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:11.011196   46132 retry.go:31] will retry after 2.159062372s: waiting for machine to come up
	I0130 20:38:13.172734   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:13.173144   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:13.173173   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:13.173106   46132 retry.go:31] will retry after 2.73165804s: waiting for machine to come up
	I0130 20:38:15.908382   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:15.908803   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:15.908834   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:15.908732   46132 retry.go:31] will retry after 3.268718285s: waiting for machine to come up
	I0130 20:38:23.603972   45441 start.go:369] acquired machines lock for "default-k8s-diff-port-877742" in 3m48.064811183s
	I0130 20:38:23.604051   45441 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:23.604061   45441 fix.go:54] fixHost starting: 
	I0130 20:38:23.604420   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:23.604456   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:23.620189   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0130 20:38:23.620538   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:23.621035   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:38:23.621073   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:23.621415   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:23.621584   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:23.621739   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:38:23.623158   45441 fix.go:102] recreateIfNeeded on default-k8s-diff-port-877742: state=Stopped err=<nil>
	I0130 20:38:23.623185   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	W0130 20:38:23.623382   45441 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:23.625974   45441 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-877742" ...
	I0130 20:38:19.178930   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:19.179358   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:19.179389   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:19.179300   46132 retry.go:31] will retry after 3.117969425s: waiting for machine to come up
	I0130 20:38:22.300539   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.300957   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has current primary IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.300982   45037 main.go:141] libmachine: (embed-certs-208583) Found IP for machine: 192.168.61.63
	I0130 20:38:22.300997   45037 main.go:141] libmachine: (embed-certs-208583) Reserving static IP address...
	I0130 20:38:22.301371   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "embed-certs-208583", mac: "52:54:00:43:f2:e1", ip: "192.168.61.63"} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.301395   45037 main.go:141] libmachine: (embed-certs-208583) Reserved static IP address: 192.168.61.63
	I0130 20:38:22.301409   45037 main.go:141] libmachine: (embed-certs-208583) DBG | skip adding static IP to network mk-embed-certs-208583 - found existing host DHCP lease matching {name: "embed-certs-208583", mac: "52:54:00:43:f2:e1", ip: "192.168.61.63"}
	I0130 20:38:22.301420   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Getting to WaitForSSH function...
	I0130 20:38:22.301436   45037 main.go:141] libmachine: (embed-certs-208583) Waiting for SSH to be available...
	I0130 20:38:22.303472   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.303820   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.303842   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.303968   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Using SSH client type: external
	I0130 20:38:22.304011   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa (-rw-------)
	I0130 20:38:22.304042   45037 main.go:141] libmachine: (embed-certs-208583) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:38:22.304052   45037 main.go:141] libmachine: (embed-certs-208583) DBG | About to run SSH command:
	I0130 20:38:22.304065   45037 main.go:141] libmachine: (embed-certs-208583) DBG | exit 0
	I0130 20:38:22.398610   45037 main.go:141] libmachine: (embed-certs-208583) DBG | SSH cmd err, output: <nil>: 
	I0130 20:38:22.398945   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetConfigRaw
	I0130 20:38:22.399605   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:22.402157   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.402531   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.402569   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.402759   45037 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/config.json ...
	I0130 20:38:22.402974   45037 machine.go:88] provisioning docker machine ...
	I0130 20:38:22.402999   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:22.403238   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.403440   45037 buildroot.go:166] provisioning hostname "embed-certs-208583"
	I0130 20:38:22.403462   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.403642   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.405694   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.406026   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.406055   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.406180   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.406429   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.406599   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.406734   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.406904   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:22.407422   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:22.407446   45037 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208583 && echo "embed-certs-208583" | sudo tee /etc/hostname
	I0130 20:38:22.548206   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208583
	
	I0130 20:38:22.548240   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.550933   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.551316   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.551345   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.551492   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.551690   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.551821   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.551934   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.552129   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:22.552425   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:22.552441   45037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:38:22.687464   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:22.687491   45037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:38:22.687536   45037 buildroot.go:174] setting up certificates
	I0130 20:38:22.687551   45037 provision.go:83] configureAuth start
	I0130 20:38:22.687562   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.687813   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:22.690307   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.690664   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.690686   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.690855   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.693139   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.693426   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.693462   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.693597   45037 provision.go:138] copyHostCerts
	I0130 20:38:22.693667   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:38:22.693686   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:38:22.693766   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:38:22.693866   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:38:22.693876   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:38:22.693912   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:38:22.693986   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:38:22.693997   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:38:22.694036   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:38:22.694122   45037 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208583 san=[192.168.61.63 192.168.61.63 localhost 127.0.0.1 minikube embed-certs-208583]
	I0130 20:38:22.862847   45037 provision.go:172] copyRemoteCerts
	I0130 20:38:22.862902   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:38:22.862921   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.865533   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.865812   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.865842   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.866006   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.866200   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.866315   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.866496   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:22.959746   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:38:22.982164   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 20:38:23.004087   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:38:23.025875   45037 provision.go:86] duration metric: configureAuth took 338.306374ms
	I0130 20:38:23.025896   45037 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:38:23.026090   45037 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:23.026173   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.028688   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.028913   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.028946   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.029125   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.029277   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.029430   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.029550   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.029679   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:23.029980   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:23.029995   45037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:38:23.337986   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:38:23.338008   45037 machine.go:91] provisioned docker machine in 935.018208ms
	I0130 20:38:23.338016   45037 start.go:300] post-start starting for "embed-certs-208583" (driver="kvm2")
	I0130 20:38:23.338026   45037 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:38:23.338051   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.338301   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:38:23.338327   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.341005   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.341398   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.341429   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.341516   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.341686   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.341825   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.341997   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.437500   45037 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:38:23.441705   45037 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:38:23.441724   45037 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:38:23.441784   45037 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:38:23.441851   45037 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:38:23.441937   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:38:23.450700   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:23.471898   45037 start.go:303] post-start completed in 133.870929ms
	I0130 20:38:23.471916   45037 fix.go:56] fixHost completed within 20.160401625s
	I0130 20:38:23.471940   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.474341   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.474659   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.474695   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.474793   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.474984   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.475181   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.475341   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.475515   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:23.475878   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:23.475891   45037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:38:23.603819   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647103.552984334
	
	I0130 20:38:23.603841   45037 fix.go:206] guest clock: 1706647103.552984334
	I0130 20:38:23.603848   45037 fix.go:219] Guest: 2024-01-30 20:38:23.552984334 +0000 UTC Remote: 2024-01-30 20:38:23.471920461 +0000 UTC m=+289.780929635 (delta=81.063873ms)
	I0130 20:38:23.603879   45037 fix.go:190] guest clock delta is within tolerance: 81.063873ms
	I0130 20:38:23.603885   45037 start.go:83] releasing machines lock for "embed-certs-208583", held for 20.292396099s
	I0130 20:38:23.603916   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.604168   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:23.606681   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.607027   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.607060   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.607190   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607703   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607876   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607947   45037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:38:23.607999   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.608115   45037 ssh_runner.go:195] Run: cat /version.json
	I0130 20:38:23.608140   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.610693   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611052   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.611078   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611154   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611199   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.611380   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.611530   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.611585   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.611625   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611666   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.611790   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.611935   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.612081   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.612197   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.725868   45037 ssh_runner.go:195] Run: systemctl --version
	I0130 20:38:23.731516   45037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:38:23.872093   45037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:38:23.878418   45037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:38:23.878493   45037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:38:23.892910   45037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:38:23.892934   45037 start.go:475] detecting cgroup driver to use...
	I0130 20:38:23.893007   45037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:38:23.905950   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:38:23.917437   45037 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:38:23.917484   45037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:38:23.929241   45037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:38:23.940979   45037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:38:24.045106   45037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:38:24.160413   45037 docker.go:233] disabling docker service ...
	I0130 20:38:24.160486   45037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:38:24.173684   45037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:38:24.185484   45037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:38:24.308292   45037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:38:24.430021   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:38:24.442910   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:38:24.460145   45037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:38:24.460211   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.469163   45037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:38:24.469225   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.478396   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.487374   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.496306   45037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:38:24.505283   45037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:38:24.512919   45037 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:38:24.512974   45037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:38:24.523939   45037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:38:24.533002   45037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:38:24.665917   45037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:38:24.839797   45037 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:38:24.839866   45037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:38:24.851397   45037 start.go:543] Will wait 60s for crictl version
	I0130 20:38:24.851454   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:38:24.855227   45037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:38:24.888083   45037 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:38:24.888163   45037 ssh_runner.go:195] Run: crio --version
	I0130 20:38:24.934626   45037 ssh_runner.go:195] Run: crio --version
	I0130 20:38:24.984233   45037 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 20:38:23.627365   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Start
	I0130 20:38:23.627532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring networks are active...
	I0130 20:38:23.628247   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring network default is active
	I0130 20:38:23.628650   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring network mk-default-k8s-diff-port-877742 is active
	I0130 20:38:23.629109   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Getting domain xml...
	I0130 20:38:23.629715   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Creating domain...
	I0130 20:38:24.849156   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting to get IP...
	I0130 20:38:24.850261   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:24.850701   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:24.850729   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:24.850645   46249 retry.go:31] will retry after 259.328149ms: waiting for machine to come up
	I0130 20:38:25.112451   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.112941   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.112971   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.112905   46249 retry.go:31] will retry after 283.994822ms: waiting for machine to come up
	I0130 20:38:25.398452   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.398937   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.398968   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.398904   46249 retry.go:31] will retry after 348.958329ms: waiting for machine to come up
	I0130 20:38:24.985681   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:24.988666   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:24.989016   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:24.989042   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:24.989288   45037 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0130 20:38:24.993626   45037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:25.005749   45037 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:38:25.005817   45037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:25.047605   45037 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:38:25.047674   45037 ssh_runner.go:195] Run: which lz4
	I0130 20:38:25.051662   45037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:38:25.055817   45037 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:38:25.055849   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:38:26.895244   45037 crio.go:444] Took 1.843605 seconds to copy over tarball
	I0130 20:38:26.895332   45037 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:38:25.749560   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.750020   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.750048   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.749985   46249 retry.go:31] will retry after 597.656366ms: waiting for machine to come up
	I0130 20:38:26.349518   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.349957   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.350004   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:26.349929   46249 retry.go:31] will retry after 600.926171ms: waiting for machine to come up
	I0130 20:38:26.952713   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.953319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.953343   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:26.953276   46249 retry.go:31] will retry after 654.976543ms: waiting for machine to come up
	I0130 20:38:27.610017   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:27.610464   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:27.610494   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:27.610413   46249 retry.go:31] will retry after 881.075627ms: waiting for machine to come up
	I0130 20:38:28.493641   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:28.494188   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:28.494218   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:28.494136   46249 retry.go:31] will retry after 1.436302447s: waiting for machine to come up
	I0130 20:38:29.932271   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:29.932794   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:29.932825   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:29.932729   46249 retry.go:31] will retry after 1.394659615s: waiting for machine to come up
	I0130 20:38:29.834721   45037 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.939351369s)
	I0130 20:38:29.834746   45037 crio.go:451] Took 2.939470 seconds to extract the tarball
	I0130 20:38:29.834754   45037 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:38:29.875618   45037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:29.921569   45037 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:38:29.921593   45037 cache_images.go:84] Images are preloaded, skipping loading
	I0130 20:38:29.921661   45037 ssh_runner.go:195] Run: crio config
	I0130 20:38:29.981565   45037 cni.go:84] Creating CNI manager for ""
	I0130 20:38:29.981590   45037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:29.981612   45037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:38:29.981637   45037 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.63 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-208583 NodeName:embed-certs-208583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:38:29.981824   45037 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-208583"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:38:29.981919   45037 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-208583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-208583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:38:29.981984   45037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:38:29.991601   45037 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:38:29.991665   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:38:30.000815   45037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0130 20:38:30.016616   45037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:38:30.032999   45037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0130 20:38:30.052735   45037 ssh_runner.go:195] Run: grep 192.168.61.63	control-plane.minikube.internal$ /etc/hosts
	I0130 20:38:30.057008   45037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:30.069968   45037 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583 for IP: 192.168.61.63
	I0130 20:38:30.070004   45037 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:30.070164   45037 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:38:30.070201   45037 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:38:30.070263   45037 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/client.key
	I0130 20:38:30.070323   45037 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.key.9879da99
	I0130 20:38:30.070370   45037 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.key
	I0130 20:38:30.070496   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:38:30.070531   45037 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:38:30.070541   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:38:30.070561   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:38:30.070586   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:38:30.070612   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:38:30.070659   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:30.071211   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:38:30.098665   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:38:30.125013   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:38:30.150013   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:38:30.177206   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:38:30.202683   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:38:30.225774   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:38:30.249090   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:38:30.274681   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:38:30.302316   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:38:30.326602   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:38:30.351136   45037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:38:30.368709   45037 ssh_runner.go:195] Run: openssl version
	I0130 20:38:30.374606   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:38:30.386421   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.391240   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.391314   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.397082   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:38:30.409040   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:38:30.420910   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.425929   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.425971   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.431609   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:38:30.443527   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:38:30.455200   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.460242   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.460307   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.466225   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:38:30.479406   45037 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:38:30.485331   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:38:30.493468   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:38:30.499465   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:38:30.505394   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:38:30.511152   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:38:30.516951   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:38:30.522596   45037 kubeadm.go:404] StartCluster: {Name:embed-certs-208583 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-208583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:38:30.522698   45037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:38:30.522747   45037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:30.559669   45037 cri.go:89] found id: ""
	I0130 20:38:30.559740   45037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:38:30.571465   45037 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:38:30.571487   45037 kubeadm.go:636] restartCluster start
	I0130 20:38:30.571539   45037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:38:30.581398   45037 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:30.582366   45037 kubeconfig.go:92] found "embed-certs-208583" server: "https://192.168.61.63:8443"
	I0130 20:38:30.584719   45037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:38:30.593986   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:30.594031   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:30.606926   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.094476   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:31.094545   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:31.106991   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.594553   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:31.594633   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:31.607554   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:32.094029   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:32.094114   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:32.107447   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:32.594998   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:32.595079   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:32.607929   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:33.094468   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:33.094562   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:33.111525   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:33.594502   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:33.594578   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:33.611216   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
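The repeated "Checking apiserver status ..." lines show minikube polling roughly every 500ms for a kube-apiserver process over SSH and giving up once an overall deadline passes (the later "context deadline exceeded" line). A rough sketch of that pattern, assuming a run helper that executes a command on the node over SSH (names are illustrative, not minikube's actual API):

    package sketch

    import (
        "context"
        "strings"
        "time"
    )

    // waitForAPIServerPID polls pgrep over SSH until the apiserver process
    // appears or ctx expires; a sketch of the loop visible in the log.
    func waitForAPIServerPID(ctx context.Context, run func(string) (string, error)) (string, error) {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            out, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*")
            if err == nil && strings.TrimSpace(out) != "" {
                return strings.TrimSpace(out), nil
            }
            select {
            case <-ctx.Done():
                return "", ctx.Err() // surfaces as "context deadline exceeded"
            case <-ticker.C:
            }
        }
    }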
	I0130 20:38:31.329366   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:31.329720   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:31.329739   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:31.329672   46249 retry.go:31] will retry after 1.8606556s: waiting for machine to come up
	I0130 20:38:33.192538   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:33.192916   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:33.192938   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:33.192873   46249 retry.go:31] will retry after 2.294307307s: waiting for machine to come up
	I0130 20:38:34.094151   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:34.094223   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:34.106531   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:34.594098   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:34.594172   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:34.606286   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.094891   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:35.094995   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:35.106949   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.594452   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:35.594532   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:35.611066   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:36.094606   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:36.094684   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:36.110348   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:36.595021   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:36.595084   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:36.609884   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:37.094347   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:37.094445   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:37.106709   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:37.594248   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:37.594348   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:37.610367   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:38.095063   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:38.095141   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:38.107195   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:38.594024   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:38.594139   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:38.606041   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.489701   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:35.490129   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:35.490166   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:35.490071   46249 retry.go:31] will retry after 2.434575636s: waiting for machine to come up
	I0130 20:38:37.927709   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:37.928168   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:37.928198   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:37.928111   46249 retry.go:31] will retry after 3.073200884s: waiting for machine to come up
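The libmachine DBG/retry.go lines above wait for the KVM domain to pick up a DHCP lease, with the delay between attempts growing (1.8s, 2.2s, 2.4s, 3.0s, 3.7s). A rough sketch of a growing, jittered backoff of that shape; the growth factor and jitter are assumptions, not libmachine's exact policy:

    package sketch

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP retries a lookup with a growing, jittered delay until it
    // succeeds or attempts are exhausted, similar to the retry.go lines above.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        delay := 200 * time.Millisecond
        var lastErr error
        for i := 0; i < attempts; i++ {
            ip, err := lookup()
            if err == nil {
                return ip, nil
            }
            lastErr = err
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // grow the wait between attempts
        }
        return "", fmt.Errorf("machine did not get an IP: %w", lastErr)
    }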
	I0130 20:38:39.094490   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:39.094572   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:39.106154   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:39.594866   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:39.594961   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:39.606937   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.094464   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:40.094549   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:40.106068   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.594556   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:40.594637   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:40.606499   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.606523   45037 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:38:40.606544   45037 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:38:40.606554   45037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:38:40.606605   45037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:40.646444   45037 cri.go:89] found id: ""
	I0130 20:38:40.646505   45037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:38:40.661886   45037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:38:40.670948   45037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:38:40.671008   45037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:38:40.679749   45037 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:38:40.679771   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:40.780597   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:41.804175   45037 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.023537725s)
	I0130 20:38:41.804214   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:41.999624   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:42.103064   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:42.173522   45037 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:38:42.173628   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:42.674417   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:43.173996   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:43.674137   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:41.004686   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:41.005140   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:41.005165   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:41.005085   46249 retry.go:31] will retry after 3.766414086s: waiting for machine to come up
	I0130 20:38:44.773568   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.774049   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Found IP for machine: 192.168.72.52
	I0130 20:38:44.774082   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has current primary IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.774099   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Reserving static IP address...
	I0130 20:38:44.774494   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-877742", mac: "52:54:00:c4:e0:0b", ip: "192.168.72.52"} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.774517   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Reserved static IP address: 192.168.72.52
	I0130 20:38:44.774543   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | skip adding static IP to network mk-default-k8s-diff-port-877742 - found existing host DHCP lease matching {name: "default-k8s-diff-port-877742", mac: "52:54:00:c4:e0:0b", ip: "192.168.72.52"}
	I0130 20:38:44.774561   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for SSH to be available...
	I0130 20:38:44.774589   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Getting to WaitForSSH function...
	I0130 20:38:44.776761   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.777079   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.777114   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.777210   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Using SSH client type: external
	I0130 20:38:44.777242   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa (-rw-------)
	I0130 20:38:44.777299   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:38:44.777332   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | About to run SSH command:
	I0130 20:38:44.777352   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | exit 0
	I0130 20:38:44.875219   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | SSH cmd err, output: <nil>: 
	I0130 20:38:44.875515   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetConfigRaw
	I0130 20:38:44.876243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:44.878633   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.879035   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.879069   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.879336   45441 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/config.json ...
	I0130 20:38:44.879504   45441 machine.go:88] provisioning docker machine ...
	I0130 20:38:44.879522   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:44.879734   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:44.879889   45441 buildroot.go:166] provisioning hostname "default-k8s-diff-port-877742"
	I0130 20:38:44.879932   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:44.880102   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:44.882426   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.882753   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.882777   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.882927   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:44.883099   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:44.883246   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:44.883409   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:44.883569   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:44.884066   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:44.884092   45441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-877742 && echo "default-k8s-diff-port-877742" | sudo tee /etc/hostname
	I0130 20:38:45.030801   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-877742
	
	I0130 20:38:45.030847   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.033532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.033897   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.033955   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.034094   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.034309   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.034489   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.034644   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.034826   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:45.035168   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:45.035187   45441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-877742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-877742/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-877742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:38:45.175807   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:45.175849   45441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:38:45.175884   45441 buildroot.go:174] setting up certificates
	I0130 20:38:45.175907   45441 provision.go:83] configureAuth start
	I0130 20:38:45.175923   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:45.176200   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:45.179102   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.179489   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.179526   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.179664   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.182178   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.182532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.182560   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.182666   45441 provision.go:138] copyHostCerts
	I0130 20:38:45.182716   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:38:45.182728   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:38:45.182788   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:38:45.182895   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:38:45.182910   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:38:45.182973   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:38:45.183054   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:38:45.183065   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:38:45.183090   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:38:45.183158   45441 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-877742 san=[192.168.72.52 192.168.72.52 localhost 127.0.0.1 minikube default-k8s-diff-port-877742]
	I0130 20:38:45.352895   45441 provision.go:172] copyRemoteCerts
	I0130 20:38:45.352960   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:38:45.352986   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.355820   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.356141   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.356169   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.356343   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.356540   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.356717   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.356868   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.136084   45819 start.go:369] acquired machines lock for "old-k8s-version-150971" in 2m36.388823473s
	I0130 20:38:46.136157   45819 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:46.136169   45819 fix.go:54] fixHost starting: 
	I0130 20:38:46.136624   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:46.136669   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:46.153210   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33685
	I0130 20:38:46.153604   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:46.154080   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:38:46.154104   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:46.154422   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:46.154630   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:38:46.154771   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:38:46.156388   45819 fix.go:102] recreateIfNeeded on old-k8s-version-150971: state=Stopped err=<nil>
	I0130 20:38:46.156420   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	W0130 20:38:46.156613   45819 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:46.158388   45819 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-150971" ...
	I0130 20:38:45.456511   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:38:45.483324   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0130 20:38:45.510567   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:38:45.535387   45441 provision.go:86] duration metric: configureAuth took 359.467243ms
	I0130 20:38:45.535421   45441 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:38:45.535659   45441 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:45.535749   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.538712   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.539176   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.539214   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.539334   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.539574   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.539741   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.539995   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.540244   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:45.540770   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:45.540796   45441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:38:45.877778   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:38:45.877813   45441 machine.go:91] provisioned docker machine in 998.294632ms
	I0130 20:38:45.877825   45441 start.go:300] post-start starting for "default-k8s-diff-port-877742" (driver="kvm2")
	I0130 20:38:45.877845   45441 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:38:45.877869   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:45.878190   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:38:45.878224   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.881167   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.881533   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.881566   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.881704   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.881880   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.882064   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.882207   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:45.972932   45441 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:38:45.977412   45441 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:38:45.977437   45441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:38:45.977514   45441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:38:45.977593   45441 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:38:45.977694   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:38:45.985843   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:46.008484   45441 start.go:303] post-start completed in 130.643321ms
	I0130 20:38:46.008509   45441 fix.go:56] fixHost completed within 22.404447995s
	I0130 20:38:46.008533   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.011463   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.011901   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.011944   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.012088   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.012304   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.012500   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.012647   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.012803   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:46.013202   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:46.013226   45441 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:38:46.135930   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647126.077813825
	
	I0130 20:38:46.135955   45441 fix.go:206] guest clock: 1706647126.077813825
	I0130 20:38:46.135965   45441 fix.go:219] Guest: 2024-01-30 20:38:46.077813825 +0000 UTC Remote: 2024-01-30 20:38:46.008513384 +0000 UTC m=+250.621109629 (delta=69.300441ms)
	I0130 20:38:46.135988   45441 fix.go:190] guest clock delta is within tolerance: 69.300441ms
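The fix.go lines compare the guest clock (read via date over SSH) with the host clock and skip a time resync when the delta is within tolerance, here 69.3ms. A simplified sketch of that comparison; the tolerance value and function name are illustrative assumptions:

    package sketch

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockDeltaOK parses the guest's `date +%s.%N` output and reports whether
    // it is within tolerance of the local clock.
    func clockDeltaOK(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance, nil
    }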
	I0130 20:38:46.135993   45441 start.go:83] releasing machines lock for "default-k8s-diff-port-877742", held for 22.53196506s
	I0130 20:38:46.136021   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.136315   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:46.139211   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.139549   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.139581   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.139695   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140427   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140507   45441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:38:46.140555   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.140639   45441 ssh_runner.go:195] Run: cat /version.json
	I0130 20:38:46.140661   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.143348   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143614   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143651   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.143675   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143843   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.144027   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.144081   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.144110   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.144228   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.144253   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.144434   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.144434   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.144580   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.144707   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.241499   45441 ssh_runner.go:195] Run: systemctl --version
	I0130 20:38:46.264180   45441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:38:46.417654   45441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:38:46.423377   45441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:38:46.423450   45441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:38:46.439524   45441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:38:46.439549   45441 start.go:475] detecting cgroup driver to use...
	I0130 20:38:46.439612   45441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:38:46.456668   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:38:46.469494   45441 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:38:46.469547   45441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:38:46.482422   45441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:38:46.496031   45441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:38:46.601598   45441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:38:46.710564   45441 docker.go:233] disabling docker service ...
	I0130 20:38:46.710633   45441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:38:46.724084   45441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:38:46.736019   45441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:38:46.853310   45441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:38:46.976197   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:38:46.991033   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:38:47.009961   45441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:38:47.010028   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.019749   45441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:38:47.019822   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.032215   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.043642   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.056005   45441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:38:47.068954   45441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:38:47.079752   45441 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:38:47.079823   45441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:38:47.096106   45441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
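The sysctl probe for net.bridge.bridge-nf-call-iptables fails with status 255 because the br_netfilter module is not loaded yet, so the flow falls back to modprobe and then enables IP forwarding. A condensed sketch of that verify-then-load fallback, assuming a generic run(cmd) helper executing over SSH:

    package sketch

    import "fmt"

    // ensureNetfilter mirrors the fallback visible in the log: if the bridge
    // netfilter sysctl is missing, load br_netfilter, then enable IP forwarding.
    func ensureNetfilter(run func(string) error) error {
        if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
            // The sysctl key only exists once the module is loaded.
            if err := run("sudo modprobe br_netfilter"); err != nil {
                return fmt.Errorf("loading br_netfilter: %w", err)
            }
        }
        return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
    }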
	I0130 20:38:47.109074   45441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:38:47.243783   45441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:38:47.468971   45441 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:38:47.469055   45441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:38:47.474571   45441 start.go:543] Will wait 60s for crictl version
	I0130 20:38:47.474646   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:38:47.479007   45441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:38:47.525155   45441 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:38:47.525259   45441 ssh_runner.go:195] Run: crio --version
	I0130 20:38:47.582308   45441 ssh_runner.go:195] Run: crio --version
	I0130 20:38:47.648689   45441 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 20:38:44.173930   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:44.197493   45037 api_server.go:72] duration metric: took 2.023971316s to wait for apiserver process to appear ...
	I0130 20:38:44.197522   45037 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:38:44.197545   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:44.198089   45037 api_server.go:269] stopped: https://192.168.61.63:8443/healthz: Get "https://192.168.61.63:8443/healthz": dial tcp 192.168.61.63:8443: connect: connection refused
	I0130 20:38:44.697622   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:48.683401   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.683435   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:48.683452   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:46.159722   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Start
	I0130 20:38:46.159892   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring networks are active...
	I0130 20:38:46.160650   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring network default is active
	I0130 20:38:46.160960   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring network mk-old-k8s-version-150971 is active
	I0130 20:38:46.161374   45819 main.go:141] libmachine: (old-k8s-version-150971) Getting domain xml...
	I0130 20:38:46.162142   45819 main.go:141] libmachine: (old-k8s-version-150971) Creating domain...
	I0130 20:38:47.490526   45819 main.go:141] libmachine: (old-k8s-version-150971) Waiting to get IP...
	I0130 20:38:47.491491   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:47.491971   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:47.492059   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:47.491949   46425 retry.go:31] will retry after 201.906522ms: waiting for machine to come up
	I0130 20:38:47.695709   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:47.696195   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:47.696226   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:47.696146   46425 retry.go:31] will retry after 347.547284ms: waiting for machine to come up
	I0130 20:38:48.045541   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.046078   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.046102   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.046013   46425 retry.go:31] will retry after 373.23424ms: waiting for machine to come up
	I0130 20:38:48.420618   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.421238   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.421263   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.421188   46425 retry.go:31] will retry after 515.166265ms: waiting for machine to come up
	I0130 20:38:48.937713   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.942554   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.942581   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.942448   46425 retry.go:31] will retry after 626.563548ms: waiting for machine to come up
	I0130 20:38:49.570078   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:49.570658   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:49.570689   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:49.570550   46425 retry.go:31] will retry after 618.022034ms: waiting for machine to come up
	I0130 20:38:48.786797   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.786825   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:48.786848   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:48.837579   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.837608   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:49.198568   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:49.206091   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:38:49.206135   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:38:49.697669   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:49.707878   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:38:49.707912   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:38:50.198039   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:50.209003   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 200:
	ok
	I0130 20:38:50.228887   45037 api_server.go:141] control plane version: v1.28.4
	I0130 20:38:50.228967   45037 api_server.go:131] duration metric: took 6.031436808s to wait for apiserver health ...
	I0130 20:38:50.228981   45037 cni.go:84] Creating CNI manager for ""
	I0130 20:38:50.228991   45037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:50.230543   45037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
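The exchange above is the usual apiserver startup progression: /healthz answers 403 while anonymous access is still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles are pending, and finally 200. A minimal poll loop for such an endpoint, written as an illustrative sketch rather than minikube's api_server.go (the URL and timeout here are assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// TLS verification is skipped because the apiserver presents a cluster-local
// certificate during bootstrap (assumption for this sketch).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.63:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("apiserver is healthy")
	}
}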
	I0130 20:38:47.649943   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:47.653185   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:47.653623   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:47.653664   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:47.653933   45441 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0130 20:38:47.659385   45441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:47.675851   45441 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:38:47.675918   45441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:47.724799   45441 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:38:47.724883   45441 ssh_runner.go:195] Run: which lz4
	I0130 20:38:47.729563   45441 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:38:47.735015   45441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:38:47.735048   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:38:49.612191   45441 crio.go:444] Took 1.882668 seconds to copy over tarball
	I0130 20:38:49.612263   45441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:38:50.231895   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:38:50.262363   45037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:38:50.290525   45037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:38:50.307654   45037 system_pods.go:59] 8 kube-system pods found
	I0130 20:38:50.307696   45037 system_pods.go:61] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:38:50.307708   45037 system_pods.go:61] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:38:50.307721   45037 system_pods.go:61] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:38:50.307736   45037 system_pods.go:61] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:38:50.307751   45037 system_pods.go:61] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:38:50.307760   45037 system_pods.go:61] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:38:50.307769   45037 system_pods.go:61] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:38:50.307788   45037 system_pods.go:61] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:38:50.307810   45037 system_pods.go:74] duration metric: took 17.261001ms to wait for pod list to return data ...
	I0130 20:38:50.307820   45037 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:38:50.317889   45037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:38:50.317926   45037 node_conditions.go:123] node cpu capacity is 2
	I0130 20:38:50.317939   45037 node_conditions.go:105] duration metric: took 10.11037ms to run NodePressure ...
	I0130 20:38:50.317960   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:50.681835   45037 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:38:50.688460   45037 kubeadm.go:787] kubelet initialised
	I0130 20:38:50.688488   45037 kubeadm.go:788] duration metric: took 6.61921ms waiting for restarted kubelet to initialise ...
	I0130 20:38:50.688498   45037 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:50.696051   45037 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.703680   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.703713   45037 pod_ready.go:81] duration metric: took 7.634057ms waiting for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.703724   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.703739   45037 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.710192   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "etcd-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.710216   45037 pod_ready.go:81] duration metric: took 6.467699ms waiting for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.710227   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "etcd-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.710235   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.720866   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.720894   45037 pod_ready.go:81] duration metric: took 10.648867ms waiting for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.720906   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.720914   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.731095   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.731162   45037 pod_ready.go:81] duration metric: took 10.237453ms waiting for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.731181   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.731190   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.097357   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-proxy-g7q5t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.097391   45037 pod_ready.go:81] duration metric: took 366.190232ms waiting for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.097404   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-proxy-g7q5t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.097413   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.499223   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.499261   45037 pod_ready.go:81] duration metric: took 401.839475ms waiting for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.499293   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.499303   45037 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.895725   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.895779   45037 pod_ready.go:81] duration metric: took 396.460908ms waiting for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.895798   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.895811   45037 pod_ready.go:38] duration metric: took 1.207302604s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
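The pod_ready waits above re-check each system-critical pod until its Ready condition is true, skipping pods whose node still reports NotReady. A rough equivalent using client-go, shown only as a sketch (the kubeconfig path is hypothetical and k8s.io/client-go must be added as a module dependency):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady mirrors the "Ready" condition check that the waits above perform.
func podReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(p))
	}
}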
	I0130 20:38:51.895836   45037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:38:51.909431   45037 ops.go:34] apiserver oom_adj: -16
	I0130 20:38:51.909454   45037 kubeadm.go:640] restartCluster took 21.337960534s
	I0130 20:38:51.909472   45037 kubeadm.go:406] StartCluster complete in 21.386877314s
	I0130 20:38:51.909491   45037 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:51.909571   45037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:38:51.911558   45037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:51.911793   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:38:51.911888   45037 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:38:51.911974   45037 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-208583"
	I0130 20:38:51.911995   45037 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-208583"
	W0130 20:38:51.912007   45037 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:38:51.912044   45037 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:51.912101   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.912138   45037 addons.go:69] Setting default-storageclass=true in profile "embed-certs-208583"
	I0130 20:38:51.912168   45037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-208583"
	I0130 20:38:51.912131   45037 addons.go:69] Setting metrics-server=true in profile "embed-certs-208583"
	I0130 20:38:51.912238   45037 addons.go:234] Setting addon metrics-server=true in "embed-certs-208583"
	W0130 20:38:51.912250   45037 addons.go:243] addon metrics-server should already be in state true
	I0130 20:38:51.912328   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.912537   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912561   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.912583   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912603   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.912686   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912711   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.923647   45037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-208583" context rescaled to 1 replicas
	I0130 20:38:51.923691   45037 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:38:51.926120   45037 out.go:177] * Verifying Kubernetes components...
	I0130 20:38:51.929413   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:38:51.930498   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0130 20:38:51.930911   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0130 20:38:51.931075   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.931580   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.931988   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.932001   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.932296   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.932730   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.932756   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.933221   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.933273   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.933917   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.934492   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.934524   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.936079   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42667
	I0130 20:38:51.936488   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.937121   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.937144   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.937525   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.937703   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.941576   45037 addons.go:234] Setting addon default-storageclass=true in "embed-certs-208583"
	W0130 20:38:51.941597   45037 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:38:51.941623   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.942033   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.942072   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.953268   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44577
	I0130 20:38:51.953715   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0130 20:38:51.953863   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.954633   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.954659   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.954742   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.955212   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.955233   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.955318   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.955530   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.955663   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.955853   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.957839   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.958080   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.960896   45037 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:38:51.961493   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
	I0130 20:38:51.962677   45037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:38:51.962838   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:38:51.964463   45037 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:38:51.964487   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:38:51.964518   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.964486   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:38:51.964554   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.963107   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.965261   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.965274   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.965656   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.966482   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.966520   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.968651   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969034   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.969062   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969307   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.969493   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.969580   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969656   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.969809   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:51.970328   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.970372   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.970391   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.970521   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.970706   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.970866   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:51.985009   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0130 20:38:51.985512   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.986096   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.986119   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.986558   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.986778   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.988698   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.991566   45037 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:38:51.991620   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:38:51.991647   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.994416   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.995367   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.995370   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.995439   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.995585   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.995740   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.995885   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:52.125074   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:38:52.140756   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:38:52.140787   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:38:52.180728   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:38:52.195559   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:38:52.195587   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:38:52.235770   45037 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 20:38:52.235779   45037 node_ready.go:35] waiting up to 6m0s for node "embed-certs-208583" to be "Ready" ...
	I0130 20:38:52.243414   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:38:52.243444   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:38:52.349604   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:38:54.111857   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.931041237s)
	I0130 20:38:54.111916   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.111938   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112013   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.986903299s)
	I0130 20:38:54.112051   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112065   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112337   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112383   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112398   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112403   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112411   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112421   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112426   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112434   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112423   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112450   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112653   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112728   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112748   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112770   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112797   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112806   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.119872   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.119893   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.120118   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.120138   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121373   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.771724991s)
	I0130 20:38:54.121408   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.121421   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.121619   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.121636   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121647   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.121655   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.121837   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.121853   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121875   45037 addons.go:470] Verifying addon metrics-server=true in "embed-certs-208583"
	I0130 20:38:54.332655   45037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
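Each addon above is applied by running the cluster's own kubectl binary against the generated manifests (the "sudo KUBECONFIG=... kubectl apply -f ..." commands in the log). Outside minikube, the same step can be scripted from Go with os/exec; a hypothetical sketch (both paths are assumptions for illustration):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical paths; minikube runs the equivalent command over SSH inside the VM.
	cmd := exec.Command("kubectl",
		"--kubeconfig", "/var/lib/minikube/kubeconfig",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}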
	I0130 20:38:50.189837   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:50.190326   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:50.190352   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:50.190273   46425 retry.go:31] will retry after 843.505616ms: waiting for machine to come up
	I0130 20:38:51.035080   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:51.035482   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:51.035511   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:51.035454   46425 retry.go:31] will retry after 1.230675294s: waiting for machine to come up
	I0130 20:38:52.267754   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:52.268342   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:52.268365   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:52.268298   46425 retry.go:31] will retry after 1.516187998s: waiting for machine to come up
	I0130 20:38:53.785734   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:53.786142   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:53.786173   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:53.786084   46425 retry.go:31] will retry after 2.020274977s: waiting for machine to come up
	I0130 20:38:53.002777   45441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.390479779s)
	I0130 20:38:53.002812   45441 crio.go:451] Took 3.390595 seconds to extract the tarball
	I0130 20:38:53.002824   45441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:38:53.059131   45441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:53.121737   45441 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:38:53.121765   45441 cache_images.go:84] Images are preloaded, skipping loading
	I0130 20:38:53.121837   45441 ssh_runner.go:195] Run: crio config
	I0130 20:38:53.187904   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:38:53.187931   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:53.187953   45441 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:38:53.187982   45441 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-877742 NodeName:default-k8s-diff-port-877742 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:38:53.188157   45441 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-877742"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:38:53.188253   45441 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-877742 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-877742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0130 20:38:53.188320   45441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:38:53.200851   45441 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:38:53.200938   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:38:53.212897   45441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0130 20:38:53.231805   45441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:38:53.253428   45441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0130 20:38:53.274041   45441 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0130 20:38:53.278499   45441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:53.295089   45441 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742 for IP: 192.168.72.52
	I0130 20:38:53.295126   45441 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:53.295326   45441 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:38:53.295393   45441 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:38:53.295497   45441 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.key
	I0130 20:38:53.295581   45441 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.key.02e1fdc8
	I0130 20:38:53.295637   45441 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.key
	I0130 20:38:53.295774   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:38:53.295813   45441 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:38:53.295827   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:38:53.295864   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:38:53.295912   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:38:53.295948   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:38:53.296012   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:53.296828   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:38:53.326150   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:38:53.356286   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:38:53.384496   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:38:53.414403   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:38:53.440628   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:38:53.465452   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:38:53.494321   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:38:53.520528   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:38:53.543933   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:38:53.569293   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:38:53.594995   45441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:38:53.615006   45441 ssh_runner.go:195] Run: openssl version
	I0130 20:38:53.622442   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:38:53.636482   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.642501   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.642572   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.649251   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:38:53.661157   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:38:53.673453   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.678369   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.678439   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.684812   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:38:53.696906   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:38:53.710065   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.714715   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.714776   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.720458   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:38:53.733050   45441 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:38:53.737894   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:38:53.744337   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:38:53.750326   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:38:53.756139   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:38:53.761883   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:38:53.767633   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
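For reference, the certificate wiring logged above can be reproduced by hand: the hash-named symlinks (3ec20f2e.0, b5213941.0, 51391683.0) come from "openssl x509 -hash", and "-checkend 86400" asks whether a certificate expires within the next 24 hours. A minimal sketch, assuming the cert is already installed (the path below is one of the files from the log; everything else is illustrative):

    #!/usr/bin/env bash
    # Sketch: trust a CA via a hash-named symlink and check that it is not about to expire.
    set -euo pipefail

    CERT=/usr/share/ca-certificates/minikubeCA.pem   # one of the files installed above

    # OpenSSL locates CAs in /etc/ssl/certs by subject hash, hence the <hash>.0 symlink names.
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"

    # -checkend 86400 exits 0 only if the cert is still valid 24 hours from now.
    if openssl x509 -noout -in "$CERT" -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h" >&2
    fi
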
	I0130 20:38:53.773367   45441 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-877742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-877742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.52 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:38:53.773480   45441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:38:53.773551   45441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:53.815095   45441 cri.go:89] found id: ""
	I0130 20:38:53.815159   45441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:38:53.826497   45441 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:38:53.826521   45441 kubeadm.go:636] restartCluster start
	I0130 20:38:53.826570   45441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:38:53.837155   45441 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:53.838622   45441 kubeconfig.go:92] found "default-k8s-diff-port-877742" server: "https://192.168.72.52:8444"
	I0130 20:38:53.841776   45441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:38:53.852124   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:53.852191   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:53.864432   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.353064   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:54.353141   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:54.365422   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.853083   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:54.853170   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:54.869932   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:55.352281   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:55.352360   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:55.369187   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.550999   45037 addons.go:505] enable addons completed in 2.639107358s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:38:54.692017   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:56.740251   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:55.809310   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:55.809708   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:55.809741   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:55.809655   46425 retry.go:31] will retry after 1.997080797s: waiting for machine to come up
	I0130 20:38:57.808397   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:57.808798   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:57.808829   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:57.808744   46425 retry.go:31] will retry after 3.605884761s: waiting for machine to come up
	I0130 20:38:55.852241   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:55.852356   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:55.864923   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:56.352455   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:56.352559   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:56.368458   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:56.853090   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:56.853175   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:56.869148   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:57.352965   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:57.353055   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:57.370697   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:57.852261   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:57.852391   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:57.868729   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:58.352147   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:58.352250   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:58.368543   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:58.852300   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:58.852373   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:58.868594   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.353039   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:59.353110   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:59.365593   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.852147   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:59.852276   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:59.865561   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:00.353077   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:00.353186   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:00.370006   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.242842   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:59.739830   45037 node_ready.go:49] node "embed-certs-208583" has status "Ready":"True"
	I0130 20:38:59.739851   45037 node_ready.go:38] duration metric: took 7.503983369s waiting for node "embed-certs-208583" to be "Ready" ...
	I0130 20:38:59.739859   45037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:59.746243   45037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.751722   45037 pod_ready.go:92] pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.751745   45037 pod_ready.go:81] duration metric: took 5.480115ms waiting for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.751752   45037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.757152   45037 pod_ready.go:92] pod "etcd-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.757175   45037 pod_ready.go:81] duration metric: took 5.417291ms waiting for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.757184   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.762156   45037 pod_ready.go:92] pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.762231   45037 pod_ready.go:81] duration metric: took 4.985076ms waiting for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.762267   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:01.773853   45037 pod_ready.go:102] pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:01.415831   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:01.416304   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:39:01.416345   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:39:01.416273   46425 retry.go:31] will retry after 3.558433109s: waiting for machine to come up
	I0130 20:39:00.852444   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:00.852545   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:00.865338   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:01.353002   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:01.353101   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:01.366419   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:01.853034   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:01.853114   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:01.866142   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:02.352652   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:02.352752   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:02.364832   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:02.852325   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:02.852406   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:02.864013   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.352408   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:03.352518   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:03.363939   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.853126   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:03.853200   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:03.865047   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.865084   45441 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:39:03.865094   45441 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:39:03.865105   45441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:39:03.865154   45441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:03.904863   45441 cri.go:89] found id: ""
	I0130 20:39:03.904932   45441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:39:03.922225   45441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:39:03.931861   45441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:39:03.931915   45441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:03.941185   45441 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:03.941205   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.064230   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.627940   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.816900   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.893059   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.986288   45441 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:39:04.986362   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
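Because the config check above found no /etc/kubernetes/*.conf files, the control plane is regenerated with individual "kubeadm init phase" steps instead of a full kubeadm init. Condensed into a standalone script, the sequence from the log looks roughly like this (binary and config paths are copied from the log; the final wait loop is an illustrative stand-in for minikube's own pgrep polling):

    #!/usr/bin/env bash
    # Sketch of the phase-by-phase control-plane regeneration seen in the log.
    set -euo pipefail

    KUBEADM=/var/lib/minikube/binaries/v1.28.4/kubeadm
    CONFIG=/var/tmp/minikube/kubeadm.yaml

    sudo "$KUBEADM" init phase certs all         --config "$CONFIG"   # regenerate the PKI
    sudo "$KUBEADM" init phase kubeconfig all    --config "$CONFIG"   # admin/kubelet/controller-manager/scheduler kubeconfigs
    sudo "$KUBEADM" init phase kubelet-start     --config "$CONFIG"   # write kubelet config and (re)start it
    sudo "$KUBEADM" init phase control-plane all --config "$CONFIG"   # static pod manifests for the control plane
    sudo "$KUBEADM" init phase etcd local        --config "$CONFIG"   # static pod manifest for local etcd

    # Once the manifests land, the kubelet brings the apiserver up; wait for the process.
    until pgrep -f kube-apiserver >/dev/null; do sleep 1; done
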
	I0130 20:39:06.448368   44923 start.go:369] acquired machines lock for "no-preload-473743" in 58.134425603s
	I0130 20:39:06.448435   44923 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:39:06.448443   44923 fix.go:54] fixHost starting: 
	I0130 20:39:06.448866   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:39:06.448900   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:39:06.468570   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I0130 20:39:06.468965   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:39:06.469552   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:39:06.469587   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:39:06.469950   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:39:06.470153   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:06.470312   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:39:06.472312   44923 fix.go:102] recreateIfNeeded on no-preload-473743: state=Stopped err=<nil>
	I0130 20:39:06.472337   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	W0130 20:39:06.472495   44923 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:39:06.474460   44923 out.go:177] * Restarting existing kvm2 VM for "no-preload-473743" ...
	I0130 20:39:04.976314   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.976787   45819 main.go:141] libmachine: (old-k8s-version-150971) Found IP for machine: 192.168.39.16
	I0130 20:39:04.976820   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.976830   45819 main.go:141] libmachine: (old-k8s-version-150971) Reserving static IP address...
	I0130 20:39:04.977271   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "old-k8s-version-150971", mac: "52:54:00:6e:fe:f8", ip: "192.168.39.16"} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:04.977300   45819 main.go:141] libmachine: (old-k8s-version-150971) Reserved static IP address: 192.168.39.16
	I0130 20:39:04.977325   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | skip adding static IP to network mk-old-k8s-version-150971 - found existing host DHCP lease matching {name: "old-k8s-version-150971", mac: "52:54:00:6e:fe:f8", ip: "192.168.39.16"}
	I0130 20:39:04.977345   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Getting to WaitForSSH function...
	I0130 20:39:04.977361   45819 main.go:141] libmachine: (old-k8s-version-150971) Waiting for SSH to be available...
	I0130 20:39:04.979621   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.980015   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:04.980042   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.980138   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Using SSH client type: external
	I0130 20:39:04.980164   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa (-rw-------)
	I0130 20:39:04.980206   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:39:04.980221   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | About to run SSH command:
	I0130 20:39:04.980259   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | exit 0
	I0130 20:39:05.079758   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | SSH cmd err, output: <nil>: 
	I0130 20:39:05.080092   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetConfigRaw
	I0130 20:39:05.080846   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:05.083636   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.084019   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.084062   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.084354   45819 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/config.json ...
	I0130 20:39:05.084608   45819 machine.go:88] provisioning docker machine ...
	I0130 20:39:05.084635   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:05.084845   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.085031   45819 buildroot.go:166] provisioning hostname "old-k8s-version-150971"
	I0130 20:39:05.085063   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.085221   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.087561   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.087930   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.087963   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.088067   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.088220   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.088384   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.088550   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.088736   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.089124   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.089141   45819 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-150971 && echo "old-k8s-version-150971" | sudo tee /etc/hostname
	I0130 20:39:05.232496   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-150971
	
	I0130 20:39:05.232528   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.234898   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.235190   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.235227   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.235310   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.235515   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.235655   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.235791   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.235921   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.236233   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.236251   45819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-150971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-150971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-150971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:39:05.370716   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:39:05.370753   45819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:39:05.370774   45819 buildroot.go:174] setting up certificates
	I0130 20:39:05.370787   45819 provision.go:83] configureAuth start
	I0130 20:39:05.370800   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.371158   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:05.373602   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.373946   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.373970   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.374153   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.376230   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.376617   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.376657   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.376763   45819 provision.go:138] copyHostCerts
	I0130 20:39:05.376816   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:39:05.376826   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:39:05.376892   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:39:05.377066   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:39:05.377079   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:39:05.377113   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:39:05.377205   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:39:05.377216   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:39:05.377243   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:39:05.377336   45819 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-150971 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube old-k8s-version-150971]
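provision.go generates this server certificate in Go, signed by the minikube CA and carrying the SAN list shown above. An equivalent (not identical) openssl invocation, assuming the ca.pem/ca-key.pem pair from the certs directory and illustrative output file names:

    #!/usr/bin/env bash
    # Sketch: issue a server certificate signed by an existing CA with the SANs listed above.
    # openssl stands in for the Go code path minikube actually uses; file names are illustrative.
    set -euo pipefail
    CERTS=~/.minikube/certs

    openssl req -new -newkey rsa:2048 -nodes \
        -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.old-k8s-version-150971"

    openssl x509 -req -in server.csr -days 365 \
        -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial \
        -out server.pem \
        -extfile <(printf 'subjectAltName=IP:192.168.39.16,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:old-k8s-version-150971\n')
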
	I0130 20:39:05.649128   45819 provision.go:172] copyRemoteCerts
	I0130 20:39:05.649183   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:39:05.649206   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.652019   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.652353   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.652385   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.652657   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.652857   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.653048   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.653207   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:05.753981   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0130 20:39:05.782847   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 20:39:05.810083   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:39:05.836967   45819 provision.go:86] duration metric: configureAuth took 466.16712ms
	I0130 20:39:05.836989   45819 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:39:05.837156   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:39:05.837222   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.840038   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.840384   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.840422   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.840597   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.840832   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.841019   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.841167   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.841338   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.841681   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.841700   45819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:39:06.170121   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:39:06.170151   45819 machine.go:91] provisioned docker machine in 1.08552444s
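The %!s(MISSING) in the command above (and in several later commands) appears to be an artifact of minikube's logger passing the command string through Go's fmt package; the command actually executed on the guest gets the real value. Reconstructed from the surrounding output, the intended effect is presumably:

    # Presumed effect of the logged command: persist extra CRI-O options, then restart the service.
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
        | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
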
	I0130 20:39:06.170163   45819 start.go:300] post-start starting for "old-k8s-version-150971" (driver="kvm2")
	I0130 20:39:06.170183   45819 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:39:06.170202   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.170544   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:39:06.170583   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.173794   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.174165   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.174192   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.174421   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.174620   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.174804   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.174947   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.273272   45819 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:39:06.277900   45819 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:39:06.277928   45819 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:39:06.278010   45819 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:39:06.278099   45819 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:39:06.278207   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:39:06.286905   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:06.311772   45819 start.go:303] post-start completed in 141.592454ms
	I0130 20:39:06.311808   45819 fix.go:56] fixHost completed within 20.175639407s
	I0130 20:39:06.311832   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.314627   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.314998   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.315027   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.315179   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.315402   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.315585   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.315758   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.315936   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:06.316254   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:06.316269   45819 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:39:06.448193   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647146.389757507
	
	I0130 20:39:06.448219   45819 fix.go:206] guest clock: 1706647146.389757507
	I0130 20:39:06.448230   45819 fix.go:219] Guest: 2024-01-30 20:39:06.389757507 +0000 UTC Remote: 2024-01-30 20:39:06.311812895 +0000 UTC m=+176.717060563 (delta=77.944612ms)
	I0130 20:39:06.448277   45819 fix.go:190] guest clock delta is within tolerance: 77.944612ms
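The mangled "date +%!s(MISSING).%!N(MISSING)" above is, by the same logging artifact noted earlier, presumably "date +%s.%N": the guest prints its clock with nanosecond precision and the delta against the host clock (77.9ms here) is checked against a tolerance. A hand-rolled version of that comparison, assuming plain ssh access to the guest:

    #!/usr/bin/env bash
    # Sketch: measure guest/host clock skew the way the fix.go lines above describe.
    set -euo pipefail
    GUEST=docker@192.168.39.16   # illustrative; minikube goes through its own SSH runner

    host_now=$(date +%s.%N)
    guest_now=$(ssh "$GUEST" date +%s.%N)

    # Absolute difference in seconds; the log treats ~78ms as within tolerance.
    awk -v a="$host_now" -v b="$guest_now" 'BEGIN { d = a - b; if (d < 0) d = -d; printf "clock delta: %.9fs\n", d }'
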
	I0130 20:39:06.448285   45819 start.go:83] releasing machines lock for "old-k8s-version-150971", held for 20.312150878s
	I0130 20:39:06.448318   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.448584   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:06.451978   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.452448   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.452475   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.452632   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453188   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453364   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453450   45819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:39:06.453501   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.453604   45819 ssh_runner.go:195] Run: cat /version.json
	I0130 20:39:06.453622   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.456426   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.456694   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.456722   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.456743   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.457015   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.457218   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.457228   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.457266   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.457473   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.457483   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.457648   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.457657   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.457834   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.457945   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.575025   45819 ssh_runner.go:195] Run: systemctl --version
	I0130 20:39:06.580884   45819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:39:06.730119   45819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:39:06.737872   45819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:39:06.737945   45819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:39:06.752952   45819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:39:06.752987   45819 start.go:475] detecting cgroup driver to use...
	I0130 20:39:06.753062   45819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:39:06.772925   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:39:06.787880   45819 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:39:06.787957   45819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:39:06.805662   45819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:39:06.820819   45819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:39:06.941809   45819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:39:07.067216   45819 docker.go:233] disabling docker service ...
	I0130 20:39:07.067299   45819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:39:07.084390   45819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:39:07.099373   45819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:39:07.242239   45819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:39:07.378297   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:39:07.390947   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:39:07.414177   45819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0130 20:39:07.414256   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.427074   45819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:39:07.427154   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.439058   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.451547   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.462473   45819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:39:07.474082   45819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:39:07.484883   45819 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:39:07.484943   45819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:39:07.502181   45819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:39:07.511315   45819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:39:07.677114   45819 ssh_runner.go:195] Run: sudo systemctl restart crio
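The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed and then restarts CRI-O. Collected into one script, the same edits look roughly like this (paths and values are taken from the log; this is an illustration, not the code minikube ships):

    #!/usr/bin/env bash
    # Sketch: point CRI-O at the pause image and cgroup driver used by this run, then restart it.
    set -euo pipefail
    CONF=/etc/crio/crio.conf.d/02-crio.conf

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                          # drop any stale setting
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"   # conmon joins the pod cgroup

    # Kernel prerequisites for bridged pod traffic.
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null

    sudo systemctl daemon-reload
    sudo systemctl restart crio
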
	I0130 20:39:07.878176   45819 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:39:07.878247   45819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:39:07.885855   45819 start.go:543] Will wait 60s for crictl version
	I0130 20:39:07.885918   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:07.895480   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:39:07.946256   45819 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:39:07.946344   45819 ssh_runner.go:195] Run: crio --version
	I0130 20:39:07.999647   45819 ssh_runner.go:195] Run: crio --version
	I0130 20:39:08.064335   45819 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0130 20:39:04.270868   45037 pod_ready.go:92] pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.270900   45037 pod_ready.go:81] duration metric: took 4.508624463s waiting for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.270911   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.276806   45037 pod_ready.go:92] pod "kube-proxy-g7q5t" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.276830   45037 pod_ready.go:81] duration metric: took 5.914142ms waiting for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.276839   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.283207   45037 pod_ready.go:92] pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.283225   45037 pod_ready.go:81] duration metric: took 6.380407ms waiting for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.283235   45037 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:06.291591   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:08.318273   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:08.065754   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:08.068986   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:08.069433   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:08.069477   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:08.069665   45819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 20:39:08.074101   45819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:08.088404   45819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 20:39:08.088468   45819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:08.133749   45819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 20:39:08.133831   45819 ssh_runner.go:195] Run: which lz4
	I0130 20:39:08.138114   45819 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:39:08.142668   45819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:39:08.142709   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0130 20:39:05.487066   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:05.987386   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.486465   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.987491   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:07.486540   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:07.518826   45441 api_server.go:72] duration metric: took 2.532536561s to wait for apiserver process to appear ...
	I0130 20:39:07.518852   45441 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:39:07.518875   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:06.475902   44923 main.go:141] libmachine: (no-preload-473743) Calling .Start
	I0130 20:39:06.476095   44923 main.go:141] libmachine: (no-preload-473743) Ensuring networks are active...
	I0130 20:39:06.476929   44923 main.go:141] libmachine: (no-preload-473743) Ensuring network default is active
	I0130 20:39:06.477344   44923 main.go:141] libmachine: (no-preload-473743) Ensuring network mk-no-preload-473743 is active
	I0130 20:39:06.477817   44923 main.go:141] libmachine: (no-preload-473743) Getting domain xml...
	I0130 20:39:06.478643   44923 main.go:141] libmachine: (no-preload-473743) Creating domain...
	I0130 20:39:07.834909   44923 main.go:141] libmachine: (no-preload-473743) Waiting to get IP...
	I0130 20:39:07.835906   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:07.836320   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:07.836371   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:07.836287   46613 retry.go:31] will retry after 205.559104ms: waiting for machine to come up
	I0130 20:39:08.043926   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.044522   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.044607   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.044570   46613 retry.go:31] will retry after 291.055623ms: waiting for machine to come up
	I0130 20:39:08.337157   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.337756   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.337859   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.337823   46613 retry.go:31] will retry after 462.903788ms: waiting for machine to come up
	I0130 20:39:08.802588   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.803397   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.803497   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.803459   46613 retry.go:31] will retry after 497.808285ms: waiting for machine to come up
	I0130 20:39:09.303349   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:09.304015   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:09.304037   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:09.303936   46613 retry.go:31] will retry after 569.824748ms: waiting for machine to come up
	I0130 20:39:09.875816   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:09.876316   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:09.876348   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:09.876259   46613 retry.go:31] will retry after 589.654517ms: waiting for machine to come up
	I0130 20:39:10.467029   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:10.467568   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:10.467601   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:10.467520   46613 retry.go:31] will retry after 857.069247ms: waiting for machine to come up
	I0130 20:39:10.796542   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:13.290072   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:09.980254   45819 crio.go:444] Took 1.842164 seconds to copy over tarball
	I0130 20:39:09.980328   45819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:39:13.116148   45819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.135783447s)
	I0130 20:39:13.116184   45819 crio.go:451] Took 3.135904 seconds to extract the tarball
	I0130 20:39:13.116196   45819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:39:13.161285   45819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:13.226970   45819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 20:39:13.227008   45819 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 20:39:13.227096   45819 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.227151   45819 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.227153   45819 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.227173   45819 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.227121   45819 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:13.227155   45819 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0130 20:39:13.227439   45819 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.227117   45819 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.229003   45819 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.229038   45819 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:13.229065   45819 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.229112   45819 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.229011   45819 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0130 20:39:13.229170   45819 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.229177   45819 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.229217   45819 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.443441   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.484878   45819 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0130 20:39:13.484941   45819 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.485021   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.489291   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.526847   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.526966   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.527312   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0130 20:39:13.528949   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.532002   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0130 20:39:13.532309   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.532701   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.662312   45819 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0130 20:39:13.662355   45819 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.662422   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.669155   45819 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0130 20:39:13.669201   45819 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.669265   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708339   45819 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0130 20:39:13.708373   45819 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0130 20:39:13.708398   45819 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0130 20:39:13.708404   45819 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.708435   45819 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0130 20:39:13.708470   45819 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.708476   45819 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0130 20:39:13.708491   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.708507   45819 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.708508   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708451   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708443   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708565   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.708549   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.767721   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.767762   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.767789   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0130 20:39:13.767835   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0130 20:39:13.767869   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0130 20:39:13.767935   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.816661   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0130 20:39:13.863740   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0130 20:39:13.863751   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0130 20:39:13.863798   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0130 20:39:14.096216   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:14.241457   45819 cache_images.go:92] LoadImages completed in 1.014424533s
	W0130 20:39:14.241562   45819 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0130 20:39:14.241641   45819 ssh_runner.go:195] Run: crio config
	I0130 20:39:14.307624   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:39:14.307644   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:14.307673   45819 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:39:14.307696   45819 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-150971 NodeName:old-k8s-version-150971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0130 20:39:14.307866   45819 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-150971"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-150971
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.16:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:39:14.307973   45819 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-150971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:39:14.308042   45819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0130 20:39:14.318757   45819 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:39:14.318830   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:39:14.329640   45819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0130 20:39:14.347498   45819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:39:14.365403   45819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0130 20:39:14.383846   45819 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I0130 20:39:14.388138   45819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:14.402420   45819 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971 for IP: 192.168.39.16
	I0130 20:39:14.402483   45819 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:39:14.402661   45819 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:39:14.402707   45819 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:39:14.402780   45819 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.key
	I0130 20:39:14.402837   45819 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.key.5918fcb3
	I0130 20:39:14.402877   45819 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.key
	I0130 20:39:14.403025   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:39:14.403076   45819 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:39:14.403094   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:39:14.403131   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:39:14.403171   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:39:14.403206   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:39:14.403290   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:14.404157   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:39:14.430902   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 20:39:14.454554   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:39:14.482335   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 20:39:14.505963   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:39:14.532616   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:39:14.558930   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:39:14.585784   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:39:14.609214   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:39:14.635743   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:39:12.268901   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:12.268934   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:12.268948   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:12.307051   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:12.307093   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:12.519619   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:12.530857   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:12.530904   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:13.019370   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:13.024544   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:13.024577   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:13.519023   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:13.525748   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:13.525784   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:14.019318   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:14.026570   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:14.026600   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:14.519000   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.074306   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.074341   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:15.074353   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.081035   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.081075   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:11.325993   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:11.326475   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:11.326506   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:11.326439   46613 retry.go:31] will retry after 994.416536ms: waiting for machine to come up
	I0130 20:39:12.323190   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:12.323897   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:12.323924   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:12.323807   46613 retry.go:31] will retry after 1.746704262s: waiting for machine to come up
	I0130 20:39:14.072583   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:14.073100   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:14.073158   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:14.073072   46613 retry.go:31] will retry after 2.322781715s: waiting for machine to come up
	I0130 20:39:15.519005   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.609496   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.609529   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:16.018990   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:16.024390   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 200:
	ok
	I0130 20:39:16.037151   45441 api_server.go:141] control plane version: v1.28.4
	I0130 20:39:16.037191   45441 api_server.go:131] duration metric: took 8.518327222s to wait for apiserver health ...
	I0130 20:39:16.037203   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:39:16.037211   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:16.039114   45441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:39:15.290788   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:17.292552   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:14.662372   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:39:14.814291   45819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:39:14.832453   45819 ssh_runner.go:195] Run: openssl version
	I0130 20:39:14.838238   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:39:14.848628   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.853713   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.853761   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.859768   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:39:14.870658   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:39:14.881444   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.886241   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.886290   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.892197   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:39:14.903459   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:39:14.914463   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.919337   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.919413   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.925258   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:39:14.935893   45819 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:39:14.941741   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:39:14.948871   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:39:14.955038   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:39:14.961605   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:39:14.967425   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:39:14.973845   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:39:14.980072   45819 kubeadm.go:404] StartCluster: {Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:39:14.980218   45819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:39:14.980265   45819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:15.021821   45819 cri.go:89] found id: ""
	I0130 20:39:15.021920   45819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:39:15.033604   45819 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:39:15.033629   45819 kubeadm.go:636] restartCluster start
	I0130 20:39:15.033686   45819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:39:15.044324   45819 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:15.045356   45819 kubeconfig.go:92] found "old-k8s-version-150971" server: "https://192.168.39.16:8443"
	I0130 20:39:15.047610   45819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:39:15.057690   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:15.057746   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:15.073207   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:15.558392   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:15.558480   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:15.574711   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.057794   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:16.057971   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:16.073882   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.557808   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:16.557879   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:16.571659   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:17.057817   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:17.057922   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:17.074250   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:17.557727   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:17.557809   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:17.573920   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:18.058504   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:18.058573   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:18.070636   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:18.558163   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:18.558262   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:18.570781   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:19.058321   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:19.058414   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:19.074887   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:19.558503   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:19.558596   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:19.570666   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.040606   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:39:16.065469   45441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:39:16.099751   45441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:39:16.113444   45441 system_pods.go:59] 8 kube-system pods found
	I0130 20:39:16.113486   45441 system_pods.go:61] "coredns-5dd5756b68-2955f" [abae9f5c-ed48-494b-b014-a28f6290d772] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:39:16.113498   45441 system_pods.go:61] "etcd-default-k8s-diff-port-877742" [0f69a8d9-5549-4f3a-8b12-ee9e96e08271] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:39:16.113509   45441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-877742" [ab6cf2c3-cc75-44b8-b039-6e21881a9ade] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:39:16.113519   45441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-877742" [4b313734-cd1e-4229-afcd-4d0b517594ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:39:16.113533   45441 system_pods.go:61] "kube-proxy-s9ssn" [ea1c69e6-d319-41ee-a47f-4937f03ecdc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:39:16.113549   45441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-877742" [3f4d9e5f-1421-4576-839b-3bdfba56700b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:39:16.113566   45441 system_pods.go:61] "metrics-server-57f55c9bc5-hzfwg" [1e06ac92-f7ff-418a-9a8d-72d763566bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:39:16.113582   45441 system_pods.go:61] "storage-provisioner" [4cf793ab-e7a5-4a51-bcb9-a07bea323a44] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:39:16.113599   45441 system_pods.go:74] duration metric: took 13.827445ms to wait for pod list to return data ...
	I0130 20:39:16.113608   45441 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:39:16.121786   45441 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:39:16.121882   45441 node_conditions.go:123] node cpu capacity is 2
	I0130 20:39:16.121904   45441 node_conditions.go:105] duration metric: took 8.289345ms to run NodePressure ...
	I0130 20:39:16.121929   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:16.440112   45441 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:39:16.447160   45441 kubeadm.go:787] kubelet initialised
	I0130 20:39:16.447188   45441 kubeadm.go:788] duration metric: took 7.04624ms waiting for restarted kubelet to initialise ...
	I0130 20:39:16.447198   45441 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:39:16.457164   45441 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-2955f" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:16.463990   45441 pod_ready.go:97] node "default-k8s-diff-port-877742" hosting pod "coredns-5dd5756b68-2955f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.464020   45441 pod_ready.go:81] duration metric: took 6.825543ms waiting for pod "coredns-5dd5756b68-2955f" in "kube-system" namespace to be "Ready" ...
	E0130 20:39:16.464033   45441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-877742" hosting pod "coredns-5dd5756b68-2955f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.464044   45441 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:16.476983   45441 pod_ready.go:97] node "default-k8s-diff-port-877742" hosting pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.477077   45441 pod_ready.go:81] duration metric: took 12.988392ms waiting for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	E0130 20:39:16.477109   45441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-877742" hosting pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.477128   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:18.486083   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:16.397588   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:16.398050   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:16.398082   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:16.397988   46613 retry.go:31] will retry after 2.411227582s: waiting for machine to come up
	I0130 20:39:18.810874   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:18.811404   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:18.811439   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:18.811358   46613 retry.go:31] will retry after 2.231016506s: waiting for machine to come up
	I0130 20:39:19.296383   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:21.790307   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:20.058718   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:20.058800   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:20.074443   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:20.558683   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:20.558756   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:20.574765   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.058367   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:21.058456   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:21.074652   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.558528   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:21.558648   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:21.573650   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:22.058161   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:22.058280   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:22.070780   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:22.558448   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:22.558525   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:22.572220   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:23.057797   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:23.057884   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:23.071260   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:23.558193   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:23.558278   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:23.571801   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:24.058483   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:24.058603   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:24.070898   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:24.558465   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:24.558546   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:24.573966   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.008056   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:23.484244   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:23.987592   45441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.987615   45441 pod_ready.go:81] duration metric: took 7.510477497s waiting for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.987624   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.993335   45441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.993358   45441 pod_ready.go:81] duration metric: took 5.726687ms waiting for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.993373   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s9ssn" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.998021   45441 pod_ready.go:92] pod "kube-proxy-s9ssn" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.998045   45441 pod_ready.go:81] duration metric: took 4.664039ms waiting for pod "kube-proxy-s9ssn" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.998057   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:21.044853   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:21.045392   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:21.045423   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:21.045336   46613 retry.go:31] will retry after 3.525646558s: waiting for machine to come up
	I0130 20:39:24.573139   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:24.573573   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:24.573596   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:24.573532   46613 retry.go:31] will retry after 4.365207536s: waiting for machine to come up
	I0130 20:39:23.790893   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:25.791630   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:28.291352   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:25.058653   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:25.058753   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:25.072061   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:25.072091   45819 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:39:25.072115   45819 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:39:25.072127   45819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:39:25.072183   45819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:25.121788   45819 cri.go:89] found id: ""
	I0130 20:39:25.121863   45819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:39:25.137294   45819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:39:25.146157   45819 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:39:25.146213   45819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:25.155323   45819 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:25.155346   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:25.279765   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.617419   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.337617183s)
	I0130 20:39:26.617457   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.825384   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.916818   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:27.026546   45819 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:39:27.026647   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:27.527637   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.026724   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.527352   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.578771   45819 api_server.go:72] duration metric: took 1.552227614s to wait for apiserver process to appear ...
	I0130 20:39:28.578793   45819 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:39:28.578814   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:28.579348   45819 api_server.go:269] stopped: https://192.168.39.16:8443/healthz: Get "https://192.168.39.16:8443/healthz": dial tcp 192.168.39.16:8443: connect: connection refused
	I0130 20:39:29.078918   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:26.006018   45441 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:27.506562   45441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:27.506596   45441 pod_ready.go:81] duration metric: took 3.50852897s waiting for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:27.506609   45441 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:29.514067   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:28.941922   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.942489   44923 main.go:141] libmachine: (no-preload-473743) Found IP for machine: 192.168.50.220
	I0130 20:39:28.942511   44923 main.go:141] libmachine: (no-preload-473743) Reserving static IP address...
	I0130 20:39:28.942528   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has current primary IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.943003   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "no-preload-473743", mac: "52:54:00:c5:07:4a", ip: "192.168.50.220"} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:28.943046   44923 main.go:141] libmachine: (no-preload-473743) DBG | skip adding static IP to network mk-no-preload-473743 - found existing host DHCP lease matching {name: "no-preload-473743", mac: "52:54:00:c5:07:4a", ip: "192.168.50.220"}
	I0130 20:39:28.943063   44923 main.go:141] libmachine: (no-preload-473743) Reserved static IP address: 192.168.50.220
	I0130 20:39:28.943081   44923 main.go:141] libmachine: (no-preload-473743) DBG | Getting to WaitForSSH function...
	I0130 20:39:28.943092   44923 main.go:141] libmachine: (no-preload-473743) Waiting for SSH to be available...
	I0130 20:39:28.945617   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.945983   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:28.946016   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.946192   44923 main.go:141] libmachine: (no-preload-473743) DBG | Using SSH client type: external
	I0130 20:39:28.946224   44923 main.go:141] libmachine: (no-preload-473743) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa (-rw-------)
	I0130 20:39:28.946257   44923 main.go:141] libmachine: (no-preload-473743) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:39:28.946268   44923 main.go:141] libmachine: (no-preload-473743) DBG | About to run SSH command:
	I0130 20:39:28.946279   44923 main.go:141] libmachine: (no-preload-473743) DBG | exit 0
	I0130 20:39:29.047127   44923 main.go:141] libmachine: (no-preload-473743) DBG | SSH cmd err, output: <nil>: 
	I0130 20:39:29.047505   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetConfigRaw
	I0130 20:39:29.048239   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:29.051059   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.051539   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.051572   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.051875   44923 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/config.json ...
	I0130 20:39:29.052098   44923 machine.go:88] provisioning docker machine ...
	I0130 20:39:29.052122   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:29.052328   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.052480   44923 buildroot.go:166] provisioning hostname "no-preload-473743"
	I0130 20:39:29.052503   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.052693   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.055532   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.055937   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.055968   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.056075   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.056267   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.056428   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.056644   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.056802   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.057242   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.057266   44923 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-473743 && echo "no-preload-473743" | sudo tee /etc/hostname
	I0130 20:39:29.199944   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-473743
	
	I0130 20:39:29.199987   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.202960   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.203402   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.203428   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.203648   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.203840   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.203974   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.204101   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.204253   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.204787   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.204815   44923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-473743' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-473743/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-473743' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:39:29.343058   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:39:29.343090   44923 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:39:29.343118   44923 buildroot.go:174] setting up certificates
	I0130 20:39:29.343131   44923 provision.go:83] configureAuth start
	I0130 20:39:29.343154   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.343457   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:29.346265   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.346671   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.346714   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.346889   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.349402   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.349799   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.349830   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.350015   44923 provision.go:138] copyHostCerts
	I0130 20:39:29.350079   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:39:29.350092   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:39:29.350146   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:39:29.350244   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:39:29.350253   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:39:29.350277   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:39:29.350343   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:39:29.350354   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:39:29.350371   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:39:29.350428   44923 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.no-preload-473743 san=[192.168.50.220 192.168.50.220 localhost 127.0.0.1 minikube no-preload-473743]
	I0130 20:39:29.671070   44923 provision.go:172] copyRemoteCerts
	I0130 20:39:29.671125   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:39:29.671150   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.673890   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.674199   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.674234   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.674386   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.674604   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.674744   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.674901   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:29.769184   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:39:29.797035   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 20:39:29.822932   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:39:29.849781   44923 provision.go:86] duration metric: configureAuth took 506.627652ms
	I0130 20:39:29.849818   44923 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:39:29.850040   44923 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 20:39:29.850134   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.852709   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.853108   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.853137   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.853331   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.853574   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.853757   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.853924   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.854108   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.854635   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.854660   44923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:39:30.232249   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:39:30.232288   44923 machine.go:91] provisioned docker machine in 1.180174143s
	I0130 20:39:30.232302   44923 start.go:300] post-start starting for "no-preload-473743" (driver="kvm2")
	I0130 20:39:30.232321   44923 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:39:30.232348   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.232668   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:39:30.232705   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.235383   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.235716   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.235745   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.235860   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.236049   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.236203   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.236346   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.332330   44923 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:39:30.337659   44923 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:39:30.337684   44923 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:39:30.337756   44923 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:39:30.337847   44923 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:39:30.337960   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:39:30.349830   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:30.374759   44923 start.go:303] post-start completed in 142.443985ms
	I0130 20:39:30.374780   44923 fix.go:56] fixHost completed within 23.926338591s
	I0130 20:39:30.374800   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.377807   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.378189   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.378244   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.378414   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.378605   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.378803   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.378954   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.379112   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:30.379649   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:30.379677   44923 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:39:30.512888   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647170.453705676
	
	I0130 20:39:30.512916   44923 fix.go:206] guest clock: 1706647170.453705676
	I0130 20:39:30.512925   44923 fix.go:219] Guest: 2024-01-30 20:39:30.453705676 +0000 UTC Remote: 2024-01-30 20:39:30.374783103 +0000 UTC m=+364.620017880 (delta=78.922573ms)
	I0130 20:39:30.512966   44923 fix.go:190] guest clock delta is within tolerance: 78.922573ms
	I0130 20:39:30.512976   44923 start.go:83] releasing machines lock for "no-preload-473743", held for 24.064563389s
	I0130 20:39:30.513083   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.513387   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:30.516359   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.516699   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.516728   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.516908   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517590   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517747   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517817   44923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:39:30.517864   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.517954   44923 ssh_runner.go:195] Run: cat /version.json
	I0130 20:39:30.517972   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.520814   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521070   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521202   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.521228   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521456   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.521480   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521480   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.521682   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.521722   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.521844   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.521845   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.521997   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.522149   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.522424   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.632970   44923 ssh_runner.go:195] Run: systemctl --version
	I0130 20:39:30.638936   44923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:39:30.784288   44923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:39:30.792079   44923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:39:30.792150   44923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:39:30.809394   44923 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:39:30.809421   44923 start.go:475] detecting cgroup driver to use...
	I0130 20:39:30.809496   44923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:39:30.824383   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:39:30.838710   44923 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:39:30.838765   44923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:39:30.852928   44923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:39:30.867162   44923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:39:30.995737   44923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:39:31.113661   44923 docker.go:233] disabling docker service ...
	I0130 20:39:31.113726   44923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:39:31.127737   44923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:39:31.139320   44923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:39:31.240000   44923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:39:31.340063   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:39:31.353303   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:39:31.371834   44923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:39:31.371889   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.382579   44923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:39:31.382639   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.392544   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.403023   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
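Taken together, the pause-image and cgroup sed edits above should leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with roughly the following values; the TOML section headers are assumed from CRI-O's standard config layout rather than read from the node:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"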
	I0130 20:39:31.413288   44923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:39:31.423806   44923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:39:31.433817   44923 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:39:31.433884   44923 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:39:31.447456   44923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
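Once the modprobe br_netfilter and ip_forward write above succeed, the kernel state can be confirmed by hand (a quick manual check, not something the test itself runs):

    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # both should report 1 after br_netfilter is loaded and forwarding is enabled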
	I0130 20:39:31.457035   44923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:39:31.562847   44923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:39:31.752772   44923 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:39:31.752844   44923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:39:31.757880   44923 start.go:543] Will wait 60s for crictl version
	I0130 20:39:31.757943   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:31.761967   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:39:31.800658   44923 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:39:31.800725   44923 ssh_runner.go:195] Run: crio --version
	I0130 20:39:31.852386   44923 ssh_runner.go:195] Run: crio --version
	I0130 20:39:31.910758   44923 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0130 20:39:30.791795   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:33.292307   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:34.079616   45819 api_server.go:269] stopped: https://192.168.39.16:8443/healthz: Get "https://192.168.39.16:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 20:39:34.079674   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:31.516571   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:33.517547   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:31.912241   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:31.915377   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:31.915705   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:31.915735   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:31.915985   44923 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0130 20:39:31.920870   44923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:31.934252   44923 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 20:39:31.934304   44923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:31.975687   44923 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0130 20:39:31.975714   44923 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 20:39:31.975762   44923 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:31.975874   44923 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:31.975900   44923 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:31.975936   44923 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0130 20:39:31.975959   44923 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:31.975903   44923 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:31.976051   44923 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:31.976063   44923 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:31.977466   44923 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:31.977485   44923 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:31.977525   44923 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0130 20:39:31.977531   44923 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:31.977569   44923 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:31.977559   44923 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:31.977663   44923 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:31.977812   44923 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:32.130396   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0130 20:39:32.132105   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.135297   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.135817   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.136698   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.154928   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.172264   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.355420   44923 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0130 20:39:32.355504   44923 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.355537   44923 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0130 20:39:32.355580   44923 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.355454   44923 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0130 20:39:32.355636   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355675   44923 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.355606   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355724   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355763   44923 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0130 20:39:32.355803   44923 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.355844   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355855   44923 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0130 20:39:32.355884   44923 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.355805   44923 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0130 20:39:32.355928   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355929   44923 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.355974   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.360081   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.370164   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.370202   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.370243   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.370259   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.370374   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.466609   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.466714   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.503174   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:32.503293   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:32.507888   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0130 20:39:32.507963   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0130 20:39:32.508061   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:32.508061   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:32.518772   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:32.518883   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0130 20:39:32.518906   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.518932   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:32.518951   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.518824   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0130 20:39:32.518996   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0130 20:39:32.519041   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:32.521450   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0130 20:39:32.521493   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0130 20:39:32.848844   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.579929   44923 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.060972543s)
	I0130 20:39:34.579971   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0130 20:39:34.580001   44923 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.060936502s)
	I0130 20:39:34.580034   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0130 20:39:34.580045   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.061073363s)
	I0130 20:39:34.580059   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0130 20:39:34.580082   44923 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.731208309s)
	I0130 20:39:34.580132   44923 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0130 20:39:34.580088   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:34.580225   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:34.580173   44923 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.580343   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:34.585311   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.796586   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:34.796615   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:34.796633   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:34.846035   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:34.846071   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:35.079544   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:35.091673   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 20:39:35.091710   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 20:39:35.579233   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:35.587045   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 20:39:35.587071   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 20:39:36.079775   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:36.086927   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0130 20:39:36.095953   45819 api_server.go:141] control plane version: v1.16.0
	I0130 20:39:36.095976   45819 api_server.go:131] duration metric: took 7.517178171s to wait for apiserver health ...
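The healthz polling above is roughly equivalent to probing the endpoint by hand; a hedged sketch with curl, where the ?verbose query returns the per-check [+]/[-] listing seen in the 500 responses earlier:

    curl -k https://192.168.39.16:8443/healthz?verbose

The earlier 403 responses appear to cover the window before the rbac/bootstrap-roles post-start hook finishes, after which anonymous access to /healthz is permitted.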
	I0130 20:39:36.095985   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:39:36.095992   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:36.097742   45819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:39:35.792385   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:37.792648   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:36.099012   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:39:36.108427   45819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
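The 457-byte file copied above is the bridge CNI config minikube writes; a rough, illustrative sketch of what such a conflist typically looks like (field values assumed for illustration, not read from the node):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }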
	I0130 20:39:36.126083   45819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:39:36.138855   45819 system_pods.go:59] 8 kube-system pods found
	I0130 20:39:36.138882   45819 system_pods.go:61] "coredns-5644d7b6d9-547k4" [6b1119fe-9c8a-44fb-b813-58271228b290] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:39:36.138888   45819 system_pods.go:61] "coredns-5644d7b6d9-dtfzh" [4cbd4f36-bc01-4f55-ba50-b7dcdcb35b9b] Running
	I0130 20:39:36.138894   45819 system_pods.go:61] "etcd-old-k8s-version-150971" [22eeed2c-7454-4b9d-8b4d-1c9a2e5feaf7] Running
	I0130 20:39:36.138899   45819 system_pods.go:61] "kube-apiserver-old-k8s-version-150971" [5ef062e6-2f78-485f-9420-e8714128e39f] Running
	I0130 20:39:36.138903   45819 system_pods.go:61] "kube-controller-manager-old-k8s-version-150971" [4e5df6df-486e-47a8-89b8-8d962212ec3e] Running
	I0130 20:39:36.138907   45819 system_pods.go:61] "kube-proxy-ncl7z" [51b28456-0070-46fc-b647-e28d6bdcfde2] Running
	I0130 20:39:36.138914   45819 system_pods.go:61] "kube-scheduler-old-k8s-version-150971" [384c4dfa-180b-4ec3-9e12-3c6d9e581c0e] Running
	I0130 20:39:36.138918   45819 system_pods.go:61] "storage-provisioner" [8a75a04c-1b80-41f6-9dfd-a7ee6f908b9d] Running
	I0130 20:39:36.138928   45819 system_pods.go:74] duration metric: took 12.820934ms to wait for pod list to return data ...
	I0130 20:39:36.138936   45819 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:39:36.142193   45819 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:39:36.142224   45819 node_conditions.go:123] node cpu capacity is 2
	I0130 20:39:36.142236   45819 node_conditions.go:105] duration metric: took 3.295582ms to run NodePressure ...
	I0130 20:39:36.142256   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:36.480656   45819 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:39:36.486153   45819 retry.go:31] will retry after 323.854639ms: kubelet not initialised
	I0130 20:39:36.816707   45819 retry.go:31] will retry after 303.422684ms: kubelet not initialised
	I0130 20:39:37.125369   45819 retry.go:31] will retry after 697.529029ms: kubelet not initialised
	I0130 20:39:37.829322   45819 retry.go:31] will retry after 626.989047ms: kubelet not initialised
	I0130 20:39:38.463635   45819 retry.go:31] will retry after 1.390069174s: kubelet not initialised
	I0130 20:39:35.519218   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:38.013652   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:40.014621   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:37.168054   44923 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.582708254s)
	I0130 20:39:37.168111   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0130 20:39:37.168188   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.587929389s)
	I0130 20:39:37.168204   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:37.168226   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0130 20:39:37.168257   44923 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:37.168330   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:37.173865   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0130 20:39:39.259662   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.091304493s)
	I0130 20:39:39.259692   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0130 20:39:39.259719   44923 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:39.259777   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:40.291441   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:42.292550   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:39.861179   45819 retry.go:31] will retry after 1.194254513s: kubelet not initialised
	I0130 20:39:41.067315   45819 retry.go:31] will retry after 3.766341089s: kubelet not initialised
	I0130 20:39:42.016919   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:44.514681   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:43.804203   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.54440472s)
	I0130 20:39:43.804228   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0130 20:39:43.804262   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:43.804360   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:44.790577   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:46.791751   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:44.839501   45819 retry.go:31] will retry after 2.957753887s: kubelet not initialised
	I0130 20:39:47.802749   45819 retry.go:31] will retry after 4.750837771s: kubelet not initialised
	I0130 20:39:47.016112   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:49.517716   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:46.385349   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.580960989s)
	I0130 20:39:46.385378   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0130 20:39:46.385403   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:46.385446   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:48.570468   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.184994355s)
	I0130 20:39:48.570504   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0130 20:39:48.570529   44923 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:48.570575   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:49.318398   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0130 20:39:49.318449   44923 cache_images.go:123] Successfully loaded all cached images
	I0130 20:39:49.318457   44923 cache_images.go:92] LoadImages completed in 17.342728639s
	I0130 20:39:49.318542   44923 ssh_runner.go:195] Run: crio config
	I0130 20:39:49.393074   44923 cni.go:84] Creating CNI manager for ""
	I0130 20:39:49.393094   44923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:49.393116   44923 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:39:49.393143   44923 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.220 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-473743 NodeName:no-preload-473743 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:39:49.393301   44923 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-473743"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:39:49.393384   44923 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-473743 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-473743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:39:49.393445   44923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0130 20:39:49.403506   44923 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:39:49.403582   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:39:49.412473   44923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0130 20:39:49.429600   44923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0130 20:39:49.445613   44923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0130 20:39:49.462906   44923 ssh_runner.go:195] Run: grep 192.168.50.220	control-plane.minikube.internal$ /etc/hosts
	I0130 20:39:49.466844   44923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
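The bash one-liner above rewrites /etc/hosts so the control-plane alias resolves to the node's own IP; afterwards the file should contain a line like:

    192.168.50.220	control-plane.minikube.internal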
	I0130 20:39:49.479363   44923 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743 for IP: 192.168.50.220
	I0130 20:39:49.479388   44923 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:39:49.479540   44923 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:39:49.479599   44923 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:39:49.479682   44923 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.key
	I0130 20:39:49.479766   44923 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.key.ef9da43a
	I0130 20:39:49.479832   44923 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.key
	I0130 20:39:49.479984   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:39:49.480020   44923 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:39:49.480031   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:39:49.480052   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:39:49.480082   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:39:49.480104   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:39:49.480148   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:49.480782   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:39:49.504588   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 20:39:49.530340   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:39:49.552867   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:39:49.575974   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:39:49.598538   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:39:49.623489   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:39:49.646965   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:39:49.671998   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:39:49.695493   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:39:49.718975   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:39:49.741793   44923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:39:49.758291   44923 ssh_runner.go:195] Run: openssl version
	I0130 20:39:49.765053   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:39:49.775428   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.780081   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.780130   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.785510   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:39:49.797983   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:39:49.807934   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.812367   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.812416   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.818021   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:39:49.827603   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:39:49.837248   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.841789   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.841838   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.847684   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
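The three cert-install steps above follow the standard OpenSSL hashed-symlink layout: openssl x509 -hash -noout prints the subject-name hash, and the certificate is then linked as <hash>.0 under /etc/ssl/certs (b5213941.0 for minikubeCA here). A minimal sketch of the same idea:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"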
	I0130 20:39:49.857387   44923 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:39:49.862411   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:39:49.871862   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:39:49.877904   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:39:49.883820   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:39:49.890534   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:39:49.898143   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
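The -checkend 86400 flag in the six checks above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it remains valid for at least that long. A standalone example using one of the same paths:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"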
	I0130 20:39:49.905607   44923 kubeadm.go:404] StartCluster: {Name:no-preload-473743 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:no-preload-473743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.220 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:39:49.905713   44923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:39:49.905768   44923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:49.956631   44923 cri.go:89] found id: ""
	I0130 20:39:49.956705   44923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:39:49.967500   44923 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:39:49.967516   44923 kubeadm.go:636] restartCluster start
	I0130 20:39:49.967572   44923 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:39:49.977077   44923 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:49.978191   44923 kubeconfig.go:92] found "no-preload-473743" server: "https://192.168.50.220:8443"
	I0130 20:39:49.980732   44923 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:39:49.990334   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:49.990377   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:50.001427   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:50.491017   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:50.491080   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:50.503162   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:48.792438   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:51.290002   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:53.291511   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:52.558586   45819 retry.go:31] will retry after 13.209460747s: kubelet not initialised
	I0130 20:39:52.013659   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:54.013756   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:50.991212   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:50.991312   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:51.004155   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:51.491296   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:51.491369   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:51.502771   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:51.991398   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:51.991529   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:52.004164   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:52.490700   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:52.490817   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:52.504616   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:52.991009   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:52.991101   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:53.004178   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:53.490804   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:53.490897   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:53.502856   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:53.990345   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:53.990451   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:54.003812   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:54.491414   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:54.491522   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:54.502969   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:54.991126   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:54.991212   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:55.003001   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:55.490521   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:55.490609   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:55.501901   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:55.791198   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:58.289750   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:56.513098   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:58.514459   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:55.990820   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:55.990893   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:56.002224   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:56.490338   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:56.490432   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:56.502497   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:56.991097   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:56.991189   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:57.002115   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:57.490604   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:57.490681   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:57.501777   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:57.991320   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:57.991419   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:58.002136   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:58.490641   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:58.490724   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:58.502247   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:58.990830   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:58.990951   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:59.001469   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:59.491109   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:59.491223   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:59.502348   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:59.991097   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:59.991182   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:40:00.002945   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:40:00.002978   44923 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:40:00.002986   44923 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:40:00.002996   44923 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:40:00.003068   44923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:40:00.045168   44923 cri.go:89] found id: ""
	I0130 20:40:00.045245   44923 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:40:00.061704   44923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:40:00.074448   44923 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:40:00.074505   44923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:40:00.083478   44923 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:40:00.083502   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:00.200934   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:00.791680   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:02.791880   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:00.515342   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:02.515914   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:05.014585   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:00.824616   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.029317   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.146596   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.232362   44923 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:40:01.232439   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:01.733118   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:02.232964   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:02.732910   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.232934   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.732852   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.758730   44923 api_server.go:72] duration metric: took 2.526367424s to wait for apiserver process to appear ...
	I0130 20:40:03.758768   44923 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:40:03.758786   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:05.290228   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:07.290842   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:07.869847   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:40:07.869897   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:40:07.869919   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:07.986795   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:07.986841   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:08.259140   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:08.265487   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:08.265523   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:08.759024   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:08.764138   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:08.764163   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:09.259821   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:09.265120   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0130 20:40:09.275933   44923 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 20:40:09.275956   44923 api_server.go:131] duration metric: took 5.517181599s to wait for apiserver health ...
	I0130 20:40:09.275965   44923 cni.go:84] Creating CNI manager for ""
	I0130 20:40:09.275971   44923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:40:09.277688   44923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:40:05.773670   45819 retry.go:31] will retry after 17.341210204s: kubelet not initialised
	I0130 20:40:07.014677   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:09.516836   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:09.279058   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:40:09.307862   44923 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:40:09.339259   44923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:40:09.355136   44923 system_pods.go:59] 8 kube-system pods found
	I0130 20:40:09.355177   44923 system_pods.go:61] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:40:09.355185   44923 system_pods.go:61] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:40:09.355194   44923 system_pods.go:61] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:40:09.355201   44923 system_pods.go:61] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:40:09.355210   44923 system_pods.go:61] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:40:09.355219   44923 system_pods.go:61] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:40:09.355238   44923 system_pods.go:61] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:40:09.355249   44923 system_pods.go:61] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:40:09.355256   44923 system_pods.go:74] duration metric: took 15.951624ms to wait for pod list to return data ...
	I0130 20:40:09.355277   44923 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:40:09.361985   44923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:40:09.362014   44923 node_conditions.go:123] node cpu capacity is 2
	I0130 20:40:09.362025   44923 node_conditions.go:105] duration metric: took 6.74245ms to run NodePressure ...
	I0130 20:40:09.362045   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:09.678111   44923 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:40:09.687808   44923 kubeadm.go:787] kubelet initialised
	I0130 20:40:09.687828   44923 kubeadm.go:788] duration metric: took 9.689086ms waiting for restarted kubelet to initialise ...
	I0130 20:40:09.687835   44923 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:09.694574   44923 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.700190   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "coredns-76f75df574-d4c7t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.700214   44923 pod_ready.go:81] duration metric: took 5.613522ms waiting for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.700230   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "coredns-76f75df574-d4c7t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.700237   44923 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.705513   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "etcd-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.705534   44923 pod_ready.go:81] duration metric: took 5.286859ms waiting for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.705545   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "etcd-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.705553   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.710360   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-apiserver-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.710378   44923 pod_ready.go:81] duration metric: took 4.814631ms waiting for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.710388   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-apiserver-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.710396   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.746412   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.746447   44923 pod_ready.go:81] duration metric: took 36.037006ms waiting for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.746460   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.746469   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.143330   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-proxy-zklzt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.143364   44923 pod_ready.go:81] duration metric: took 396.879081ms waiting for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.143377   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-proxy-zklzt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.143385   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.549132   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-scheduler-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.549171   44923 pod_ready.go:81] duration metric: took 405.77755ms waiting for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.549192   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-scheduler-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.549201   44923 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.942777   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.942802   44923 pod_ready.go:81] duration metric: took 393.589996ms waiting for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.942811   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.942817   44923 pod_ready.go:38] duration metric: took 1.254975084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:10.942834   44923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:40:10.954894   44923 ops.go:34] apiserver oom_adj: -16
	I0130 20:40:10.954916   44923 kubeadm.go:640] restartCluster took 20.987393757s
	I0130 20:40:10.954926   44923 kubeadm.go:406] StartCluster complete in 21.049328159s
	I0130 20:40:10.954944   44923 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:40:10.955025   44923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:40:10.956906   44923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:40:10.957249   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:40:10.957343   44923 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:40:10.957411   44923 addons.go:69] Setting storage-provisioner=true in profile "no-preload-473743"
	I0130 20:40:10.957434   44923 addons.go:234] Setting addon storage-provisioner=true in "no-preload-473743"
	I0130 20:40:10.957440   44923 addons.go:69] Setting metrics-server=true in profile "no-preload-473743"
	I0130 20:40:10.957447   44923 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0130 20:40:10.957451   44923 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:40:10.957471   44923 addons.go:234] Setting addon metrics-server=true in "no-preload-473743"
	W0130 20:40:10.957481   44923 addons.go:243] addon metrics-server should already be in state true
	I0130 20:40:10.957512   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.957522   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.957946   44923 addons.go:69] Setting default-storageclass=true in profile "no-preload-473743"
	I0130 20:40:10.957911   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958230   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.958246   44923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-473743"
	I0130 20:40:10.958477   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958517   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.958600   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958621   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.962458   44923 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-473743" context rescaled to 1 replicas
	I0130 20:40:10.962497   44923 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.220 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:40:10.964710   44923 out.go:177] * Verifying Kubernetes components...
	I0130 20:40:10.966259   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:40:10.975195   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45125
	I0130 20:40:10.975661   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.976231   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.976262   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.976885   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.977509   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.977542   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.978199   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0130 20:40:10.978220   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I0130 20:40:10.979039   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.979106   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.979581   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.979600   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.979584   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.979663   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.979964   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.980074   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.980160   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:10.980655   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.980690   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.984068   44923 addons.go:234] Setting addon default-storageclass=true in "no-preload-473743"
	W0130 20:40:10.984119   44923 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:40:10.984155   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.984564   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.984615   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.997126   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
	I0130 20:40:10.997598   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.997990   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.998006   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.998355   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.998520   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:10.998838   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0130 20:40:10.999186   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.999589   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.999604   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.000003   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.000289   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:11.000433   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.002723   44923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:40:11.001789   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.004317   44923 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:40:11.004329   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:40:11.004345   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.005791   44923 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:40:11.007234   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:40:11.007246   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:40:11.007259   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.006415   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0130 20:40:11.007375   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.007826   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:11.008219   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.008258   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.008405   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.008550   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:11.008566   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.008624   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.008780   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.008900   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.008904   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.009548   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:11.009578   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:11.010414   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.010713   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.010733   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.010938   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.011137   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.011308   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.011424   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.047889   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44097
	I0130 20:40:11.048317   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:11.048800   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:11.048820   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.049210   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.049451   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:11.051439   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.052012   44923 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:40:11.052030   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:40:11.052049   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.055336   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.055865   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.055888   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.055976   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.056175   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.056344   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.056461   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.176670   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:40:11.176694   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:40:11.182136   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:40:11.194238   44923 node_ready.go:35] waiting up to 6m0s for node "no-preload-473743" to be "Ready" ...
	I0130 20:40:11.194301   44923 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 20:40:11.213877   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:40:11.222566   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:40:11.222591   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:40:11.264089   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:40:11.264119   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:40:11.337758   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:40:12.237415   44923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.055244284s)
	I0130 20:40:12.237483   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237482   44923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023570997s)
	I0130 20:40:12.237504   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237521   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237538   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237867   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.237927   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.237949   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237964   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237973   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.237986   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.238018   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.238030   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.238303   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.238319   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.238415   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.238473   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.238485   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.245407   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.245432   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.245653   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.245670   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.287632   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.287660   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.287973   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.287998   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.288000   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.288014   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.288024   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.288266   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.288286   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.288297   44923 addons.go:470] Verifying addon metrics-server=true in "no-preload-473743"
	I0130 20:40:12.288352   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.290298   44923 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:40:09.291762   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:11.791994   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:12.016265   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:14.515097   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:12.291867   44923 addons.go:505] enable addons completed in 1.334521495s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:40:13.200767   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:15.699345   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:14.291583   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:16.292248   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:17.014332   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:19.014556   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:18.198624   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:18.699015   44923 node_ready.go:49] node "no-preload-473743" has status "Ready":"True"
	I0130 20:40:18.699041   44923 node_ready.go:38] duration metric: took 7.504770144s waiting for node "no-preload-473743" to be "Ready" ...
	I0130 20:40:18.699050   44923 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:18.709647   44923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.718022   44923 pod_ready.go:92] pod "coredns-76f75df574-d4c7t" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:18.718046   44923 pod_ready.go:81] duration metric: took 8.370541ms waiting for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.718054   44923 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.722992   44923 pod_ready.go:92] pod "etcd-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:18.723012   44923 pod_ready.go:81] duration metric: took 4.951762ms waiting for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.723020   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:20.732288   44923 pod_ready.go:102] pod "kube-apiserver-no-preload-473743" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:18.791445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:21.290205   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.123817   45819 kubeadm.go:787] kubelet initialised
	I0130 20:40:23.123842   45819 kubeadm.go:788] duration metric: took 46.643164333s waiting for restarted kubelet to initialise ...
	I0130 20:40:23.123849   45819 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:23.128282   45819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.132665   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.132688   45819 pod_ready.go:81] duration metric: took 4.375362ms waiting for pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.132701   45819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.137072   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.137092   45819 pod_ready.go:81] duration metric: took 4.379467ms waiting for pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.137102   45819 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.142038   45819 pod_ready.go:92] pod "etcd-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.142058   45819 pod_ready.go:81] duration metric: took 4.949104ms waiting for pod "etcd-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.142070   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.146657   45819 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.146676   45819 pod_ready.go:81] duration metric: took 4.598238ms waiting for pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.146686   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.518159   45819 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.518189   45819 pod_ready.go:81] duration metric: took 371.488133ms waiting for pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.518203   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ncl7z" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.919594   45819 pod_ready.go:92] pod "kube-proxy-ncl7z" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.919628   45819 pod_ready.go:81] duration metric: took 401.417322ms waiting for pod "kube-proxy-ncl7z" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.919644   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:24.318125   45819 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:24.318152   45819 pod_ready.go:81] duration metric: took 398.499457ms waiting for pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:24.318166   45819 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" ...
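The "Ready":"True" / "Ready":"False" lines above come from inspecting each pod's PodReady condition. A minimal client-go sketch of that kind of check follows; it is illustrative only (not minikube's pod_ready.go), and the kubeconfig path, namespace and pod name are placeholders.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True,
// i.e. the state printed above as "Ready":"True".
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path and pod name.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-example-node", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, podIsReady(pod))
}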
	I0130 20:40:21.513600   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.514060   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:21.233466   44923 pod_ready.go:92] pod "kube-apiserver-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.233494   44923 pod_ready.go:81] duration metric: took 2.510466903s waiting for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.233507   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.240688   44923 pod_ready.go:92] pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.240709   44923 pod_ready.go:81] duration metric: took 7.194165ms waiting for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.240721   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.248250   44923 pod_ready.go:92] pod "kube-proxy-zklzt" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.248271   44923 pod_ready.go:81] duration metric: took 7.542304ms waiting for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.248278   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.256673   44923 pod_ready.go:92] pod "kube-scheduler-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.256700   44923 pod_ready.go:81] duration metric: took 2.008414366s waiting for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.256712   44923 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:25.263480   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.790334   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.290232   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.292270   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.324649   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.825120   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.016305   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.513650   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:27.264434   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:29.764240   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:30.793210   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:33.292255   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:31.326850   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:33.824698   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:30.514448   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:32.518435   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.013676   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:32.264144   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:34.763689   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.789964   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.791095   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.825018   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:38.326094   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.014222   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:39.517868   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.265137   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:39.764115   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:40.290332   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.290850   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:40.327135   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.824370   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.014917   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.516872   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.264387   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.265504   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.291131   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.790512   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.827108   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:47.327816   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.518922   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.014136   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.765151   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.265178   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:48.790952   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.291730   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.824442   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:52.325401   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.014513   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.518388   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.266567   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.764501   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.789915   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:55.790332   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:57.791445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:54.825612   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:57.324364   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:59.327308   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:56.020804   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:58.515544   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:56.263707   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:58.264200   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:00.264261   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:59.792066   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:02.289879   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:01.824631   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:03.824749   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:01.014649   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:03.014805   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:05.017318   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:02.763825   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:04.764040   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:04.290927   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.791853   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.326570   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:08.824889   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:07.516190   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:10.018532   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.765257   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:09.263466   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:09.290744   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:11.791416   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:10.825025   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:13.324947   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:12.514850   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:14.522700   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:11.263911   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:13.763429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:15.766371   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:14.289786   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:16.291753   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:15.325297   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:17.824762   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:17.014087   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:19.518139   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:18.263727   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:20.263854   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:18.791517   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:21.292155   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:19.825751   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:22.324733   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:21.518205   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:24.015562   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:22.767815   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:25.263283   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:23.790847   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.290464   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:24.824063   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.825938   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:29.325683   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.016724   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:28.514670   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:27.264429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:29.264577   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:28.791861   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.291558   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.824367   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.824771   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:30.515432   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.014091   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.265902   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.764211   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.764788   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.791968   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:36.290991   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:38.291383   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.824891   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.825500   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.514120   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.514579   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:39.516165   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.765006   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.263816   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.791224   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.792487   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.326148   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.825282   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.014531   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:44.514337   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.264845   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:44.764275   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:45.290370   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.790557   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:45.325184   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.825091   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:46.515035   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.013829   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.263752   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.263882   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.790715   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:52.291348   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:50.326963   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:52.825278   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:51.014381   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:53.016755   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:51.264167   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:53.264888   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.265000   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:54.291846   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:56.790351   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.325156   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:57.325446   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:59.326114   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.515866   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:58.013768   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:00.014052   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:57.763548   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:59.764374   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:58.790584   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:01.294420   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:01.827046   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.325425   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:02.514100   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.516981   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:02.264420   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.264851   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:03.790918   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.290560   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:08.291334   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.824232   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:08.824527   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:07.014375   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:09.513980   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.764222   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:09.264299   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:10.292477   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:12.795626   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:10.825706   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:13.325572   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:11.514369   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:14.016090   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:11.264881   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:13.763625   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.764616   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.290292   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:17.790263   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.326185   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:17.826504   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:16.518263   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:19.014219   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:18.265723   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:20.764663   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:19.792068   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:22.292221   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:20.325069   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:22.326307   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:21.014811   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:23.014876   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:25.017016   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:23.264098   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:25.267065   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:24.791616   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.291739   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:24.825416   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:26.826380   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.325717   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.513692   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:30.015246   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.763938   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.764135   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.789997   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.790272   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.825466   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:33.826959   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:32.513718   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:35.014948   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.780185   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:34.265062   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:33.790477   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.290139   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.291801   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.325475   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.825210   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:37.513778   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:39.518155   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.764137   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.765005   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:40.790050   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:42.791739   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:41.325239   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:43.826300   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:42.013844   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:44.014396   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:41.268687   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:43.765101   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:45.290120   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:47.291365   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.325321   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.824944   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.015721   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.514689   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.269498   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.763780   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:50.765289   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:49.790212   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:52.291090   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:51.324622   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:53.324873   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:51.015934   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:53.016171   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:52.765777   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.264419   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:54.292666   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:56.790098   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.825230   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.324546   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.514240   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.014796   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:57.764094   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:59.764594   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.790445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.790844   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:03.290632   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.325916   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:02.824174   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.514203   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:02.515317   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:05.018840   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:01.767672   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:04.263736   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:04.290221   45037 pod_ready.go:81] duration metric: took 4m0.006974938s waiting for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	E0130 20:43:04.290244   45037 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:43:04.290252   45037 pod_ready.go:38] duration metric: took 4m4.550384705s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
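The WaitExtra failure above is a context deadline firing before the condition is met: the metrics-server pod never reports Ready within the 4m0s budget, so the wait returns context deadline exceeded. A stdlib-only sketch of that bounded-polling pattern (illustrative, not minikube's implementation; the 2s poll interval is an assumption):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitFor polls check until it returns true or the context deadline fires.
// When the deadline fires first, the returned error is context.DeadlineExceeded,
// which surfaces in logs like the "context deadline exceeded" line above.
func waitFor(ctx context.Context, interval time.Duration, check func() bool) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			if check() {
				return nil
			}
		}
	}
}

func main() {
	// 4m0s budget, mirroring the "extra waiting up to 4m0s" wait above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	err := waitFor(ctx, 2*time.Second, func() bool { return false /* pod never became Ready */ })
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("waitPodCondition: context deadline exceeded")
	}
}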
	I0130 20:43:04.290265   45037 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:43:04.290289   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:04.290330   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:04.354567   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:04.354594   45037 cri.go:89] found id: ""
	I0130 20:43:04.354603   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:04.354664   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.359890   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:04.359961   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:04.399415   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:04.399437   45037 cri.go:89] found id: ""
	I0130 20:43:04.399444   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:04.399484   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.404186   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:04.404241   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:04.445968   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:04.445994   45037 cri.go:89] found id: ""
	I0130 20:43:04.446003   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:04.446060   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.450215   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:04.450285   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:04.492438   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:04.492462   45037 cri.go:89] found id: ""
	I0130 20:43:04.492476   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:04.492537   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.497227   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:04.497301   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:04.535936   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:04.535960   45037 cri.go:89] found id: ""
	I0130 20:43:04.535970   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:04.536026   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.540968   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:04.541046   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:04.584192   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:04.584214   45037 cri.go:89] found id: ""
	I0130 20:43:04.584222   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:04.584280   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.588842   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:04.588914   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:04.630957   45037 cri.go:89] found id: ""
	I0130 20:43:04.630984   45037 logs.go:276] 0 containers: []
	W0130 20:43:04.630994   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:04.631000   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:04.631057   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:04.672712   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:04.672741   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:04.672747   45037 cri.go:89] found id: ""
	I0130 20:43:04.672757   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:04.672830   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.677537   45037 ssh_runner.go:195] Run: which crictl
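The container discovery above runs sudo crictl ps -a --quiet --name=<component> on the node (through minikube's SSH runner) and treats each output line as a container ID; an empty result is reported as "0 containers", as for kindnet. A small sketch of the same invocation with os/exec, meant to be run directly on the node rather than over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command as the cri.go lines above, run locally on the node
	// instead of through minikube's SSH runner.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=etcd").Output()
	if err != nil {
		panic(err)
	}
	// --quiet prints one container ID per line; no output means no
	// matching containers were found.
	ids := strings.Fields(strings.TrimSpace(string(out)))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}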
	I0130 20:43:04.681806   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:04.681833   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:04.743389   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:04.743420   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:04.783857   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:04.783891   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:04.838800   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:04.838827   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:04.897337   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:04.897361   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:04.954337   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:04.954367   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:05.110447   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:05.110476   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:05.169238   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:05.169275   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:05.209860   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:05.209890   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:05.224272   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:05.224296   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:05.264818   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:05.264857   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:05.304626   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:05.304657   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:05.748336   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:05.748377   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.306639   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:43:08.324001   45037 api_server.go:72] duration metric: took 4m16.400279002s to wait for apiserver process to appear ...
	I0130 20:43:08.324028   45037 api_server.go:88] waiting for apiserver healthz status ...
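The healthz wait declared above probes the apiserver's /healthz endpoint until it answers. A minimal sketch of one such probe (illustrative; the apiserver address is a placeholder, and certificate verification is skipped only to keep the sketch short):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Placeholder address; substitute the cluster's real apiserver endpoint.
	healthzURL := "https://192.168.39.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the example self-contained; a real
		// check would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(healthzURL)
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok".
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}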
	I0130 20:43:08.324061   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:08.324111   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:08.364000   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.364026   45037 cri.go:89] found id: ""
	I0130 20:43:08.364036   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:08.364093   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.368770   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:08.368843   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:08.411371   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:08.411394   45037 cri.go:89] found id: ""
	I0130 20:43:08.411404   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:08.411462   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.415582   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:08.415648   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:08.455571   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:08.455601   45037 cri.go:89] found id: ""
	I0130 20:43:08.455612   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:08.455662   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.459908   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:08.459972   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:08.497350   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:08.497374   45037 cri.go:89] found id: ""
	I0130 20:43:08.497383   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:08.497441   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.501504   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:08.501552   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:08.550031   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:08.550057   45037 cri.go:89] found id: ""
	I0130 20:43:08.550066   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:08.550181   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.555166   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:08.555215   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:08.590903   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:08.590929   45037 cri.go:89] found id: ""
	I0130 20:43:08.590939   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:08.590997   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.594837   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:08.594888   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:08.630989   45037 cri.go:89] found id: ""
	I0130 20:43:08.631015   45037 logs.go:276] 0 containers: []
	W0130 20:43:08.631024   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:08.631029   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:08.631072   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:08.669579   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:08.669603   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:08.669609   45037 cri.go:89] found id: ""
	I0130 20:43:08.669617   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:08.669666   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.673938   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.677733   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:08.677757   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:08.726492   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:08.726519   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:04.825623   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:07.331997   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:07.514074   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:09.514484   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:06.264040   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:08.264505   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:10.764072   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
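	(The interleaved pod_ready lines above come from minikube repeatedly polling the metrics-server pod until its Ready condition turns True. As a rough sketch of that kind of check — not minikube's actual pod_ready.go, and assuming a kubeconfig at the default location — a client-go version looks roughly like this:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the named pod has condition Ready=True.
	func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every couple of seconds, the same cadence the log lines above roughly show.
		for {
			ready, err := isPodReady(context.Background(), cs, "kube-system", "metrics-server-57f55c9bc5-wzb2g")
			fmt.Println("ready:", ready, "err:", err)
			if ready {
				return
			}
			time.Sleep(2 * time.Second)
		}
	}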
	I0130 20:43:08.740624   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:08.740645   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:08.792517   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:08.792547   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:08.829131   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:08.829166   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:08.870777   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:08.870802   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:08.909648   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:08.909678   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.953671   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:08.953701   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:08.989624   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:08.989648   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:09.383141   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:09.383174   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:09.442685   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:09.442719   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:09.563370   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:09.563398   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:09.614390   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:09.614422   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
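	(The block above is the log-gathering pass: for each control-plane component minikube first asks crictl for the matching container ID, then tails that container's log. A minimal stand-alone sketch of the same two-step pattern — the flags are taken from the Run lines above; this is not minikube's ssh_runner — could be:)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors "sudo crictl ps -a --quiet --name=<filter>" from the log above.
	func containerIDs(filter string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+filter).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-proxy"} {
			ids, err := containerIDs(name)
			if err != nil {
				fmt.Println(name, "error:", err)
				continue
			}
			for _, id := range ids {
				// Same tail length the log lines above use.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
			}
		}
	}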
	I0130 20:43:12.156906   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:43:12.161980   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 200:
	ok
	I0130 20:43:12.163284   45037 api_server.go:141] control plane version: v1.28.4
	I0130 20:43:12.163308   45037 api_server.go:131] duration metric: took 3.839271753s to wait for apiserver health ...
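	(The healthz probe above is a plain HTTPS GET against the apiserver endpoint, considered healthy when it returns 200 with the body "ok". A hedged sketch of the same request — TLS verification is skipped here purely to keep the example short; minikube itself authenticates with the cluster's certificates — is:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// NOTE: InsecureSkipVerify only to keep the sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.61.63:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%q healthy=%v\n",
			resp.StatusCode, body, resp.StatusCode == 200 && string(body) == "ok")
	}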
	I0130 20:43:12.163318   45037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:43:12.163343   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:12.163389   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:12.207351   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:12.207372   45037 cri.go:89] found id: ""
	I0130 20:43:12.207381   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:12.207436   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.213923   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:12.213987   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:12.263647   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:12.263680   45037 cri.go:89] found id: ""
	I0130 20:43:12.263690   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:12.263743   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.268327   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:12.268381   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:12.310594   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:12.310614   45037 cri.go:89] found id: ""
	I0130 20:43:12.310622   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:12.310670   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.315134   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:12.315185   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:12.359384   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:12.359404   45037 cri.go:89] found id: ""
	I0130 20:43:12.359412   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:12.359468   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.363796   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:12.363856   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:12.399741   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:12.399771   45037 cri.go:89] found id: ""
	I0130 20:43:12.399783   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:12.399844   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.404237   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:12.404302   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:12.457772   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:12.457806   45037 cri.go:89] found id: ""
	I0130 20:43:12.457816   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:12.457876   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.462316   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:12.462378   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:12.499660   45037 cri.go:89] found id: ""
	I0130 20:43:12.499690   45037 logs.go:276] 0 containers: []
	W0130 20:43:12.499699   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:12.499707   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:12.499763   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:12.548931   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:12.548961   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:12.548969   45037 cri.go:89] found id: ""
	I0130 20:43:12.548978   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:12.549037   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.552983   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.557322   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:12.557340   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:12.599784   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:12.599812   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:12.716124   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:12.716156   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:12.766940   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:12.766980   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:12.804026   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:12.804059   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:13.165109   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:13.165153   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:13.204652   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:13.204679   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:13.242644   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:13.242675   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:13.282527   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:13.282558   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:13.335128   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:13.335156   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:13.385564   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:13.385599   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:13.449564   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:13.449603   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:13.464376   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:13.464406   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:09.825882   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:11.827628   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.325309   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:12.012894   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.014496   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:12.765167   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.765356   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:16.017083   45037 system_pods.go:59] 8 kube-system pods found
	I0130 20:43:16.017121   45037 system_pods.go:61] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running
	I0130 20:43:16.017128   45037 system_pods.go:61] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running
	I0130 20:43:16.017135   45037 system_pods.go:61] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running
	I0130 20:43:16.017141   45037 system_pods.go:61] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running
	I0130 20:43:16.017148   45037 system_pods.go:61] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running
	I0130 20:43:16.017154   45037 system_pods.go:61] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running
	I0130 20:43:16.017165   45037 system_pods.go:61] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:43:16.017172   45037 system_pods.go:61] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running
	I0130 20:43:16.017185   45037 system_pods.go:74] duration metric: took 3.853859786s to wait for pod list to return data ...
	I0130 20:43:16.017198   45037 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:43:16.019949   45037 default_sa.go:45] found service account: "default"
	I0130 20:43:16.019967   45037 default_sa.go:55] duration metric: took 2.760881ms for default service account to be created ...
	I0130 20:43:16.019976   45037 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:43:16.025198   45037 system_pods.go:86] 8 kube-system pods found
	I0130 20:43:16.025219   45037 system_pods.go:89] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running
	I0130 20:43:16.025225   45037 system_pods.go:89] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running
	I0130 20:43:16.025229   45037 system_pods.go:89] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running
	I0130 20:43:16.025234   45037 system_pods.go:89] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running
	I0130 20:43:16.025238   45037 system_pods.go:89] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running
	I0130 20:43:16.025242   45037 system_pods.go:89] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running
	I0130 20:43:16.025248   45037 system_pods.go:89] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:43:16.025258   45037 system_pods.go:89] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running
	I0130 20:43:16.025264   45037 system_pods.go:126] duration metric: took 5.282813ms to wait for k8s-apps to be running ...
	I0130 20:43:16.025270   45037 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:43:16.025309   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:43:16.043415   45037 system_svc.go:56] duration metric: took 18.134458ms WaitForService to wait for kubelet.
	I0130 20:43:16.043443   45037 kubeadm.go:581] duration metric: took 4m24.119724167s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:43:16.043472   45037 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:43:16.046999   45037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:43:16.047021   45037 node_conditions.go:123] node cpu capacity is 2
	I0130 20:43:16.047035   45037 node_conditions.go:105] duration metric: took 3.556321ms to run NodePressure ...
	I0130 20:43:16.047048   45037 start.go:228] waiting for startup goroutines ...
	I0130 20:43:16.047061   45037 start.go:233] waiting for cluster config update ...
	I0130 20:43:16.047078   45037 start.go:242] writing updated cluster config ...
	I0130 20:43:16.047368   45037 ssh_runner.go:195] Run: rm -f paused
	I0130 20:43:16.098760   45037 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:43:16.100739   45037 out.go:177] * Done! kubectl is now configured to use "embed-certs-208583" cluster and "default" namespace by default
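	(The line above compares the local kubectl version, 1.29.1, with the cluster version, 1.28.4, and reports a minor-version skew of 1. A small illustration of that comparison — not the minikube helper itself — is:)

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor components
	// of two "major.minor.patch" version strings.
	func minorSkew(a, b string) int {
		minor := func(v string) int {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			n, _ := strconv.Atoi(parts[1])
			return n
		}
		d := minor(a) - minor(b)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		fmt.Println(minorSkew("1.29.1", "1.28.4")) // prints 1, matching the log line above
	}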
	I0130 20:43:16.326450   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:18.824456   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:16.514335   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:19.014528   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:17.264059   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:19.264543   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:20.824649   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.324731   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:21.014634   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.513609   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:21.763771   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.764216   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:25.325575   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:27.825708   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:25.514335   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:27.506991   45441 pod_ready.go:81] duration metric: took 4m0.000368672s waiting for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" ...
	E0130 20:43:27.507020   45441 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 20:43:27.507037   45441 pod_ready.go:38] duration metric: took 4m11.059827725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:43:27.507060   45441 kubeadm.go:640] restartCluster took 4m33.680532974s
	W0130 20:43:27.507128   45441 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 20:43:27.507159   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 20:43:26.264077   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:28.264502   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:30.764952   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:30.325157   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:32.325570   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:32.766530   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:35.264541   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:34.825545   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:36.825757   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:38.825922   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:37.764613   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:39.772391   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:41.253066   45441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.745883202s)
	I0130 20:43:41.253138   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:43:41.267139   45441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:43:41.276814   45441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:43:41.286633   45441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:43:41.286678   45441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 20:43:41.340190   45441 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 20:43:41.340255   45441 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 20:43:41.491373   45441 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 20:43:41.491524   45441 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 20:43:41.491644   45441 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 20:43:41.735659   45441 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:43:41.737663   45441 out.go:204]   - Generating certificates and keys ...
	I0130 20:43:41.737778   45441 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 20:43:41.737875   45441 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 20:43:41.737961   45441 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 20:43:41.738034   45441 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 20:43:41.738116   45441 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 20:43:41.738215   45441 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 20:43:41.738295   45441 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 20:43:41.738381   45441 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 20:43:41.738481   45441 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 20:43:41.738542   45441 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 20:43:41.738578   45441 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 20:43:41.738633   45441 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:43:41.894828   45441 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:43:42.122408   45441 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:43:42.406185   45441 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:43:42.526794   45441 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:43:42.527473   45441 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:43:42.529906   45441 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:43:40.826403   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:43.324650   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:42.531956   45441 out.go:204]   - Booting up control plane ...
	I0130 20:43:42.532077   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:43:42.532175   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:43:42.532276   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:43:42.550440   45441 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:43:42.551432   45441 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:43:42.551515   45441 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 20:43:42.666449   45441 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 20:43:42.265430   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:44.268768   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:45.325121   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:47.325585   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:46.768728   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:49.264313   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:50.670814   45441 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004172 seconds
	I0130 20:43:50.670940   45441 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 20:43:50.693878   45441 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 20:43:51.228257   45441 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 20:43:51.228498   45441 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-877742 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 20:43:51.743336   45441 kubeadm.go:322] [bootstrap-token] Using token: hhyk9t.fiwckj4dbaljm18s
	I0130 20:43:51.744898   45441 out.go:204]   - Configuring RBAC rules ...
	I0130 20:43:51.744996   45441 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 20:43:51.755911   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 20:43:51.769124   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 20:43:51.773192   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 20:43:51.776643   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 20:43:51.780261   45441 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 20:43:51.807541   45441 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 20:43:52.070376   45441 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 20:43:52.167958   45441 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 20:43:52.167994   45441 kubeadm.go:322] 
	I0130 20:43:52.168050   45441 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 20:43:52.168061   45441 kubeadm.go:322] 
	I0130 20:43:52.168142   45441 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 20:43:52.168157   45441 kubeadm.go:322] 
	I0130 20:43:52.168193   45441 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 20:43:52.168254   45441 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 20:43:52.168325   45441 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 20:43:52.168336   45441 kubeadm.go:322] 
	I0130 20:43:52.168399   45441 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 20:43:52.168409   45441 kubeadm.go:322] 
	I0130 20:43:52.168469   45441 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 20:43:52.168480   45441 kubeadm.go:322] 
	I0130 20:43:52.168546   45441 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 20:43:52.168639   45441 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 20:43:52.168731   45441 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 20:43:52.168741   45441 kubeadm.go:322] 
	I0130 20:43:52.168834   45441 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 20:43:52.168928   45441 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 20:43:52.168938   45441 kubeadm.go:322] 
	I0130 20:43:52.169033   45441 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token hhyk9t.fiwckj4dbaljm18s \
	I0130 20:43:52.169145   45441 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 20:43:52.169175   45441 kubeadm.go:322] 	--control-plane 
	I0130 20:43:52.169185   45441 kubeadm.go:322] 
	I0130 20:43:52.169274   45441 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 20:43:52.169283   45441 kubeadm.go:322] 
	I0130 20:43:52.169374   45441 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token hhyk9t.fiwckj4dbaljm18s \
	I0130 20:43:52.169485   45441 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 20:43:52.170103   45441 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
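	(The --discovery-token-ca-cert-hash printed in the join commands above is kubeadm's pin on the cluster CA: the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo. A hedged sketch that recomputes it from the CA file — the certificate directory comes from the [certs] lines above, while the ca.crt file name is an assumption — is:)

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Directory taken from the "[certs] Using certificateDir folder" line above; file name assumed.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm pins the SHA-256 of the DER-encoded SubjectPublicKeyInfo.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}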
	I0130 20:43:52.170128   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:43:52.170138   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:43:52.171736   45441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:43:49.827004   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:51.828301   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:54.324951   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:52.173096   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:43:52.207763   45441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
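	(The two Run lines above create /etc/cni/net.d and copy a 457-byte bridge conflist into it. The file's contents are not shown in the log; purely as an illustration of the shape such a bridge CNI conflist usually takes — every value below is an assumption, not the bytes minikube wrote — it looks roughly like:)

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}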
	I0130 20:43:52.239391   45441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:43:52.239528   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:52.239550   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=default-k8s-diff-port-877742 minikube.k8s.io/updated_at=2024_01_30T20_43_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:52.359837   45441 ops.go:34] apiserver oom_adj: -16
	I0130 20:43:52.622616   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:53.123165   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:53.622655   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:54.122819   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:54.623579   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:55.122784   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:51.265017   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:53.765449   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:56.826059   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:59.324992   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:55.622980   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.123436   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.623691   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:57.122685   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:57.623150   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:58.123358   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:58.623234   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:59.122804   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:59.623408   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:00.122730   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.264593   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:58.764827   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:00.765740   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:01.325185   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:03.325582   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:00.622649   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:01.123007   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:01.623488   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:02.123117   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:02.623186   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:03.122987   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:03.623625   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:04.123576   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:04.623493   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:05.123073   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:05.292330   45441 kubeadm.go:1088] duration metric: took 13.052870929s to wait for elevateKubeSystemPrivileges.
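	(The long run of "kubectl get sa default" lines above is a fixed-interval poll: minikube keeps re-running the command roughly every 500ms until the default service account exists, then records the elapsed time as the elevateKubeSystemPrivileges duration. A minimal sketch of that retry pattern — plain os/exec, not the minikube code — is:)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(5 * time.Minute)
		start := time.Now()
		for time.Now().Before(deadline) {
			// Same command the log lines above repeat until it succeeds.
			err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Printf("default service account ready after %s\n", time.Since(start))
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}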
	I0130 20:44:05.292359   45441 kubeadm.go:406] StartCluster complete in 5m11.519002976s
	I0130 20:44:05.292376   45441 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:05.292446   45441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:44:05.294511   45441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:05.296490   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:44:05.296705   45441 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:44:05.296739   45441 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:44:05.296797   45441 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.296814   45441 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.296823   45441 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:44:05.296872   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.297028   45441 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.297068   45441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-877742"
	I0130 20:44:05.297257   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297282   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.297449   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297476   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.297476   45441 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.297498   45441 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.297512   45441 addons.go:243] addon metrics-server should already be in state true
	I0130 20:44:05.297557   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.297934   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297972   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.314618   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0130 20:44:05.314913   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I0130 20:44:05.315139   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.315638   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.315718   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.315751   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.316139   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.316295   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.316318   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.316342   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0130 20:44:05.316649   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.316695   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.316729   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.316842   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.317131   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.317573   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.317598   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.317967   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.318507   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.318539   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.321078   45441 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.321104   45441 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:44:05.321129   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.321503   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.321530   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.338144   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0130 20:44:05.338180   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0130 20:44:05.338717   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.338798   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.339318   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.339325   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.339343   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.339345   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.339804   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.339819   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.339987   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.340017   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.340889   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33925
	I0130 20:44:05.341348   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.341847   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.341870   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.342243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.342328   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.344137   45441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:44:05.342641   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.344745   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.345833   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:44:05.345871   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:44:05.345889   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.345936   45441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:44:05.347567   45441 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:05.347585   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:44:05.347602   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.346048   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.348959   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.349635   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.349686   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.349853   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.350119   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.350404   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.350619   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:05.351435   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.351548   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.351565   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.351753   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.351924   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.352094   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.352237   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:05.366786   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I0130 20:44:05.367211   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.367744   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.367768   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.368174   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.368435   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.370411   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.370688   45441 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:05.370707   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:44:05.370726   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.375681   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.375726   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.375758   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.375778   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.375938   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.376136   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.376324   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
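	(The sshutil lines above show minikube opening SSH clients to 192.168.72.52:22 as the docker user with the profile's id_rsa key, which is how the addon manifests get copied and the remote commands get run. A hedged sketch of establishing that kind of connection with golang.org/x/crypto/ssh — host-key checking is disabled here only to keep the example short — is:)

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; real code should verify the host key
		}
		client, err := ssh.Dial("tcp", "192.168.72.52:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
		fmt.Printf("%s err=%v\n", out, err)
	}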
	I0130 20:44:03.263112   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:05.264610   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:05.536173   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:44:05.547763   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:44:05.547783   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:44:05.561439   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:05.589801   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:05.619036   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:44:05.619063   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:44:05.672972   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:05.672993   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:44:05.753214   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:05.861799   45441 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-877742" context rescaled to 1 replicas
	I0130 20:44:05.861852   45441 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.52 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:44:05.863602   45441 out.go:177] * Verifying Kubernetes components...
	I0130 20:44:05.864716   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:07.418910   45441 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.882691784s)
	I0130 20:44:07.418945   45441 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0130 20:44:07.960063   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.370223433s)
	I0130 20:44:07.960120   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960161   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.960158   45441 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.095417539s)
	I0130 20:44:07.960143   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.206889959s)
	I0130 20:44:07.960223   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398756648s)
	I0130 20:44:07.960234   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960247   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.960190   45441 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-877742" to be "Ready" ...
	I0130 20:44:07.960251   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961892   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961892   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961902   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961919   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961921   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961902   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961934   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961936   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961941   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961944   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961950   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961955   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961970   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961980   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961990   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.962309   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962340   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962348   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962350   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.962357   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.962380   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962380   45441 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-877742"
	I0130 20:44:07.962420   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962439   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.979672   45441 node_ready.go:49] node "default-k8s-diff-port-877742" has status "Ready":"True"
	I0130 20:44:07.979700   45441 node_ready.go:38] duration metric: took 19.437813ms waiting for node "default-k8s-diff-port-877742" to be "Ready" ...
	I0130 20:44:07.979713   45441 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:08.005989   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:08.006020   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:08.006266   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:08.006287   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:08.006286   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:08.008091   45441 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0130 20:44:05.329467   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:07.826212   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:08.009918   45441 addons.go:505] enable addons completed in 2.713172208s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0130 20:44:08.032478   45441 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.539497   45441 pod_ready.go:92] pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.539527   45441 pod_ready.go:81] duration metric: took 1.50701275s waiting for pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.539537   45441 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.545068   45441 pod_ready.go:92] pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.545090   45441 pod_ready.go:81] duration metric: took 5.546681ms waiting for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.545099   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.550794   45441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.550817   45441 pod_ready.go:81] duration metric: took 5.711144ms waiting for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.550829   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.556050   45441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.556068   45441 pod_ready.go:81] duration metric: took 5.232882ms waiting for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.556076   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-59zvd" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.562849   45441 pod_ready.go:92] pod "kube-proxy-59zvd" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.562866   45441 pod_ready.go:81] duration metric: took 6.784197ms waiting for pod "kube-proxy-59zvd" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.562874   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.965815   45441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.965846   45441 pod_ready.go:81] duration metric: took 402.96387ms waiting for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.965860   45441 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:07.265985   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:09.765494   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:10.326063   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:12.825921   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:11.974724   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:14.473879   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:12.265674   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:14.765546   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:15.325945   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:17.326041   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:16.974143   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:19.473552   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:16.765691   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:18.766995   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:19.824366   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:21.824919   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:24.318779   45819 pod_ready.go:81] duration metric: took 4m0.000598437s waiting for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" ...
	E0130 20:44:24.318808   45819 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 20:44:24.318829   45819 pod_ready.go:38] duration metric: took 4m1.194970045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:24.318872   45819 kubeadm.go:640] restartCluster took 5m9.285235807s
	W0130 20:44:24.318943   45819 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 20:44:24.318974   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 20:44:21.973193   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.974160   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:21.263429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.263586   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.263609   44923 pod_ready.go:81] duration metric: took 4m0.006890289s waiting for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	E0130 20:44:23.263618   44923 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:44:23.263625   44923 pod_ready.go:38] duration metric: took 4m4.564565945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:23.263637   44923 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:44:23.263671   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:23.263711   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:23.319983   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:23.320013   44923 cri.go:89] found id: ""
	I0130 20:44:23.320023   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:23.320078   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.325174   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:23.325239   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:23.375914   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:23.375952   44923 cri.go:89] found id: ""
	I0130 20:44:23.375960   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:23.376003   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.380265   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:23.380324   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:23.428507   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:23.428534   44923 cri.go:89] found id: ""
	I0130 20:44:23.428544   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:23.428591   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.434113   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:23.434184   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:23.522888   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:23.522915   44923 cri.go:89] found id: ""
	I0130 20:44:23.522922   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:23.522964   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.534952   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:23.535015   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:23.576102   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:23.576129   44923 cri.go:89] found id: ""
	I0130 20:44:23.576138   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:23.576185   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.580463   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:23.580527   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:23.620990   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:23.621011   44923 cri.go:89] found id: ""
	I0130 20:44:23.621018   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:23.621069   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.625706   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:23.625762   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:23.666341   44923 cri.go:89] found id: ""
	I0130 20:44:23.666368   44923 logs.go:276] 0 containers: []
	W0130 20:44:23.666378   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:23.666384   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:23.666441   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:23.707229   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:23.707248   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:23.707252   44923 cri.go:89] found id: ""
	I0130 20:44:23.707258   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:23.707314   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.711242   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.715859   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:23.715883   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:23.775696   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:23.775722   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:23.817767   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:23.817796   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:24.301934   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:24.301969   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:24.361236   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:24.361265   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:24.511849   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:24.511886   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:24.573648   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:24.573683   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:24.620572   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:24.620608   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:24.687312   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:24.687346   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:24.702224   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:24.702262   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:24.749188   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:24.749218   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:24.793069   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:24.793093   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:24.829705   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:24.829730   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:29.263901   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.944900372s)
	I0130 20:44:29.263978   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:29.277198   45819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:44:29.286661   45819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:44:29.297088   45819 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:44:29.297129   45819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0130 20:44:29.360347   45819 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0130 20:44:29.360446   45819 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 20:44:29.516880   45819 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 20:44:29.517075   45819 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 20:44:29.517217   45819 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 20:44:29.756175   45819 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:44:29.756323   45819 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:44:29.764820   45819 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0130 20:44:29.907654   45819 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:44:26.473595   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:28.473808   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:29.909307   45819 out.go:204]   - Generating certificates and keys ...
	I0130 20:44:29.909397   45819 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 20:44:29.909484   45819 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 20:44:29.909578   45819 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 20:44:29.909674   45819 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 20:44:29.909784   45819 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 20:44:29.909866   45819 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 20:44:29.909974   45819 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 20:44:29.910057   45819 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 20:44:29.910163   45819 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 20:44:29.910266   45819 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 20:44:29.910316   45819 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 20:44:29.910409   45819 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:44:29.974805   45819 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:44:30.281258   45819 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:44:30.605015   45819 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:44:30.782125   45819 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:44:30.783329   45819 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:44:27.369691   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:44:27.393279   44923 api_server.go:72] duration metric: took 4m16.430750077s to wait for apiserver process to appear ...
	I0130 20:44:27.393306   44923 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:44:27.393355   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:27.393434   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:27.443366   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:27.443390   44923 cri.go:89] found id: ""
	I0130 20:44:27.443400   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:27.443457   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.448963   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:27.449021   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:27.502318   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:27.502341   44923 cri.go:89] found id: ""
	I0130 20:44:27.502348   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:27.502398   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.507295   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:27.507352   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:27.548224   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:27.548247   44923 cri.go:89] found id: ""
	I0130 20:44:27.548255   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:27.548299   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.552806   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:27.552864   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:27.608403   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:27.608434   44923 cri.go:89] found id: ""
	I0130 20:44:27.608444   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:27.608523   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.613370   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:27.613435   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:27.668380   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:27.668406   44923 cri.go:89] found id: ""
	I0130 20:44:27.668417   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:27.668470   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.673171   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:27.673231   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:27.720444   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:27.720473   44923 cri.go:89] found id: ""
	I0130 20:44:27.720483   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:27.720546   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.725007   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:27.725062   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:27.772186   44923 cri.go:89] found id: ""
	I0130 20:44:27.772214   44923 logs.go:276] 0 containers: []
	W0130 20:44:27.772224   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:27.772231   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:27.772288   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:27.813222   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:27.813259   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:27.813268   44923 cri.go:89] found id: ""
	I0130 20:44:27.813286   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:27.813347   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.817565   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.821737   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:27.821759   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:28.299900   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:28.299933   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:28.441830   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:28.441866   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:28.485579   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:28.485611   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:28.500668   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:28.500691   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:28.558472   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:28.558502   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:28.604655   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:28.604687   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:28.670010   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:28.670041   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:28.712222   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:28.712259   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:28.764243   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:28.764276   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:28.801930   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:28.801956   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:28.848585   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:28.848612   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:28.902903   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:28.902936   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:30.785050   45819 out.go:204]   - Booting up control plane ...
	I0130 20:44:30.785155   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:44:30.790853   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:44:30.798657   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:44:30.799425   45819 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:44:30.801711   45819 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 20:44:30.475584   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:32.973843   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:34.974144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:31.454103   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:44:31.460009   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0130 20:44:31.461505   44923 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 20:44:31.461527   44923 api_server.go:131] duration metric: took 4.068214052s to wait for apiserver health ...
	I0130 20:44:31.461537   44923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:44:31.461563   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:31.461626   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:31.509850   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:31.509874   44923 cri.go:89] found id: ""
	I0130 20:44:31.509884   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:31.509941   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.514078   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:31.514136   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:31.555581   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:31.555605   44923 cri.go:89] found id: ""
	I0130 20:44:31.555613   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:31.555674   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.559888   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:31.559948   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:31.620256   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:31.620285   44923 cri.go:89] found id: ""
	I0130 20:44:31.620295   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:31.620352   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.626003   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:31.626064   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:31.662862   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:31.662889   44923 cri.go:89] found id: ""
	I0130 20:44:31.662899   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:31.662972   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.668242   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:31.668306   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:31.717065   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:31.717089   44923 cri.go:89] found id: ""
	I0130 20:44:31.717098   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:31.717160   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.722195   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:31.722250   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:31.779789   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:31.779812   44923 cri.go:89] found id: ""
	I0130 20:44:31.779821   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:31.779894   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.784710   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:31.784776   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:31.826045   44923 cri.go:89] found id: ""
	I0130 20:44:31.826073   44923 logs.go:276] 0 containers: []
	W0130 20:44:31.826082   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:31.826087   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:31.826131   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:31.868212   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:31.868236   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:31.868243   44923 cri.go:89] found id: ""
	I0130 20:44:31.868253   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:31.868314   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.873019   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.877432   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:31.877456   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:31.915888   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:31.915915   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:31.972950   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:31.972978   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:32.028993   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:32.029028   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:32.046602   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:32.046633   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:32.094088   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:32.094123   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:32.138616   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:32.138645   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:32.526995   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:32.527033   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:32.591970   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:32.592003   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:32.655438   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:32.655466   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:32.707131   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:32.707163   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:32.749581   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:32.749610   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:32.815778   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:32.815805   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:35.448121   44923 system_pods.go:59] 8 kube-system pods found
	I0130 20:44:35.448155   44923 system_pods.go:61] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running
	I0130 20:44:35.448162   44923 system_pods.go:61] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running
	I0130 20:44:35.448169   44923 system_pods.go:61] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running
	I0130 20:44:35.448175   44923 system_pods.go:61] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running
	I0130 20:44:35.448181   44923 system_pods.go:61] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running
	I0130 20:44:35.448188   44923 system_pods.go:61] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running
	I0130 20:44:35.448198   44923 system_pods.go:61] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:44:35.448210   44923 system_pods.go:61] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running
	I0130 20:44:35.448221   44923 system_pods.go:74] duration metric: took 3.986678023s to wait for pod list to return data ...
	I0130 20:44:35.448227   44923 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:44:35.451377   44923 default_sa.go:45] found service account: "default"
	I0130 20:44:35.451397   44923 default_sa.go:55] duration metric: took 3.162882ms for default service account to be created ...
	I0130 20:44:35.451404   44923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:44:35.457941   44923 system_pods.go:86] 8 kube-system pods found
	I0130 20:44:35.457962   44923 system_pods.go:89] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running
	I0130 20:44:35.457969   44923 system_pods.go:89] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running
	I0130 20:44:35.457976   44923 system_pods.go:89] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running
	I0130 20:44:35.457983   44923 system_pods.go:89] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running
	I0130 20:44:35.457992   44923 system_pods.go:89] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running
	I0130 20:44:35.457999   44923 system_pods.go:89] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running
	I0130 20:44:35.458013   44923 system_pods.go:89] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:44:35.458023   44923 system_pods.go:89] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running
	I0130 20:44:35.458032   44923 system_pods.go:126] duration metric: took 6.622973ms to wait for k8s-apps to be running ...
	I0130 20:44:35.458040   44923 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:44:35.458085   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:35.478158   44923 system_svc.go:56] duration metric: took 20.107762ms WaitForService to wait for kubelet.
	I0130 20:44:35.478182   44923 kubeadm.go:581] duration metric: took 4m24.515659177s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:44:35.478205   44923 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:44:35.481624   44923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:44:35.481649   44923 node_conditions.go:123] node cpu capacity is 2
	I0130 20:44:35.481661   44923 node_conditions.go:105] duration metric: took 3.450762ms to run NodePressure ...
	I0130 20:44:35.481674   44923 start.go:228] waiting for startup goroutines ...
	I0130 20:44:35.481682   44923 start.go:233] waiting for cluster config update ...
	I0130 20:44:35.481695   44923 start.go:242] writing updated cluster config ...
	I0130 20:44:35.481966   44923 ssh_runner.go:195] Run: rm -f paused
	I0130 20:44:35.534192   44923 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0130 20:44:35.537286   44923 out.go:177] * Done! kubectl is now configured to use "no-preload-473743" cluster and "default" namespace by default
	I0130 20:44:36.975176   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:39.472594   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:40.808532   45819 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005048 seconds
	I0130 20:44:40.808703   45819 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 20:44:40.821445   45819 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 20:44:41.350196   45819 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 20:44:41.350372   45819 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-150971 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0130 20:44:41.859169   45819 kubeadm.go:322] [bootstrap-token] Using token: vlkrdr.8ubylscclgt88ll2
	I0130 20:44:41.862311   45819 out.go:204]   - Configuring RBAC rules ...
	I0130 20:44:41.862450   45819 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 20:44:41.870072   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 20:44:41.874429   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 20:44:41.883936   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 20:44:41.887738   45819 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 20:44:41.963361   45819 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 20:44:42.299030   45819 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 20:44:42.300623   45819 kubeadm.go:322] 
	I0130 20:44:42.300708   45819 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 20:44:42.300721   45819 kubeadm.go:322] 
	I0130 20:44:42.300820   45819 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 20:44:42.300845   45819 kubeadm.go:322] 
	I0130 20:44:42.300886   45819 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 20:44:42.300975   45819 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 20:44:42.301048   45819 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 20:44:42.301061   45819 kubeadm.go:322] 
	I0130 20:44:42.301126   45819 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 20:44:42.301241   45819 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 20:44:42.301309   45819 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 20:44:42.301326   45819 kubeadm.go:322] 
	I0130 20:44:42.301417   45819 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0130 20:44:42.301482   45819 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 20:44:42.301488   45819 kubeadm.go:322] 
	I0130 20:44:42.301554   45819 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vlkrdr.8ubylscclgt88ll2 \
	I0130 20:44:42.301684   45819 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 20:44:42.301717   45819 kubeadm.go:322]     --control-plane 	  
	I0130 20:44:42.301726   45819 kubeadm.go:322] 
	I0130 20:44:42.301827   45819 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 20:44:42.301844   45819 kubeadm.go:322] 
	I0130 20:44:42.301984   45819 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vlkrdr.8ubylscclgt88ll2 \
	I0130 20:44:42.302116   45819 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 20:44:42.302689   45819 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 20:44:42.302726   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:44:42.302739   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:44:42.305197   45819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:44:42.306389   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:44:42.357619   45819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:44:42.381081   45819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:44:42.381189   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:42.381196   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=old-k8s-version-150971 minikube.k8s.io/updated_at=2024_01_30T20_44_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:42.406368   45819 ops.go:34] apiserver oom_adj: -16
	I0130 20:44:42.639356   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:43.139439   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:43.640260   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:44.140080   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:44.639587   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:41.473598   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:43.474059   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:45.140354   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:45.640062   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:46.140282   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:46.639400   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:47.140308   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:47.640045   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:48.139406   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:48.640423   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:49.139702   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:49.640036   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:45.973530   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:47.974364   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:49.974551   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:50.139435   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:50.639471   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:51.140088   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:51.639444   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.139401   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.639731   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:53.140050   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:53.639411   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:54.139942   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:54.640279   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.473624   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:54.474924   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:55.139610   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:55.639431   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:56.140267   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:56.639444   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:57.140068   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:57.296527   45819 kubeadm.go:1088] duration metric: took 14.915402679s to wait for elevateKubeSystemPrivileges.
	I0130 20:44:57.296567   45819 kubeadm.go:406] StartCluster complete in 5m42.316503122s
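The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait: the bootstrapper polls roughly every 500ms until the "default" service account exists before it proceeds. A minimal sketch of that polling pattern is below; it is not minikube's kubeadm.go, and the kubectl and kubeconfig paths are simply copied from the log lines above for illustration.

	// Sketch: retry "kubectl get sa default" until it succeeds or a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		const kubectl = "/var/lib/minikube/binaries/v1.16.0/kubectl" // path taken from the log above
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing of the log lines
		}
		fmt.Println("timed out waiting for default service account")
	}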
	I0130 20:44:57.296588   45819 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:57.296672   45819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:44:57.298762   45819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:57.299005   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:44:57.299123   45819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:44:57.299208   45819 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299220   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:44:57.299229   45819 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-150971"
	W0130 20:44:57.299241   45819 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:44:57.299220   45819 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299300   45819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-150971"
	I0130 20:44:57.299315   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.299247   45819 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299387   45819 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-150971"
	W0130 20:44:57.299397   45819 addons.go:243] addon metrics-server should already be in state true
	I0130 20:44:57.299433   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.299705   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299726   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.299756   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299760   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299796   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.299897   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.319159   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0130 20:44:57.319202   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0130 20:44:57.319167   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34823
	I0130 20:44:57.319578   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.319707   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.319771   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.320071   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320103   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320242   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320261   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320408   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320423   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320586   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.320630   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.321140   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.321158   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.321591   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.321624   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.321675   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.321705   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.325091   45819 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-150971"
	W0130 20:44:57.325106   45819 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:44:57.325125   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.325420   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.325442   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.342652   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0130 20:44:57.342787   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41961
	I0130 20:44:57.343203   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.343303   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.343745   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.343779   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.343848   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I0130 20:44:57.343887   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.343903   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.344220   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.344244   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.344220   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.344493   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.344494   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.344707   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.344730   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.345083   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.346139   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.346172   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.346830   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.346891   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.348974   45819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:44:57.350330   45819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:44:57.350364   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:44:57.351707   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:44:57.351729   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.351684   45819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:57.351795   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:44:57.351821   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.356145   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356428   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356595   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.356621   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356767   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.357040   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.357095   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.357123   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.357218   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.357266   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.357458   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.357451   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.357617   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.357754   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.362806   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0130 20:44:57.363167   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.363742   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.363770   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.364074   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.364280   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.365877   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.366086   45819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:57.366096   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:44:57.366107   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.369237   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.369890   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.369930   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.369968   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.370351   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.370563   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.370712   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.509329   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:57.535146   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:57.536528   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:44:57.559042   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:44:57.559066   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:44:57.643054   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:44:57.643081   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:44:57.773561   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:57.773588   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:44:57.848668   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:57.910205   45819 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-150971" context rescaled to 1 replicas
	I0130 20:44:57.910247   45819 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:44:57.912390   45819 out.go:177] * Verifying Kubernetes components...
	I0130 20:44:57.913764   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:58.721986   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.186811658s)
	I0130 20:44:58.722033   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722045   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722145   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.185575635s)
	I0130 20:44:58.722210   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212845439s)
	I0130 20:44:58.722213   45819 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0130 20:44:58.722254   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722271   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722347   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.722359   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722371   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.722381   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722391   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722537   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.722576   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722593   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.722611   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722621   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722659   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722675   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.724251   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.724291   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.724304   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.798383   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.798410   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.798745   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.798767   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.798816   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125243   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.276531373s)
	I0130 20:44:59.125305   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:59.125322   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:59.125256   45819 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.211465342s)
	I0130 20:44:59.125360   45819 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-150971" to be "Ready" ...
	I0130 20:44:59.125612   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:59.125639   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125650   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:59.125650   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:59.125659   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:59.125902   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:59.125953   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:59.125963   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125972   45819 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-150971"
	I0130 20:44:59.127634   45819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:44:59.129415   45819 addons.go:505] enable addons completed in 1.830294624s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:44:59.141691   45819 node_ready.go:49] node "old-k8s-version-150971" has status "Ready":"True"
	I0130 20:44:59.141715   45819 node_ready.go:38] duration metric: took 16.331635ms waiting for node "old-k8s-version-150971" to be "Ready" ...
	I0130 20:44:59.141725   45819 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:59.146645   45819 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace to be "Ready" ...
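The pod_ready.go lines here and throughout this log are checking the "Ready" condition on individual kube-system pods. A minimal client-go sketch of that check follows, purely for illustration; it is not minikube's pod_ready.go, and the kubeconfig path, namespace, and pod name are placeholders taken from the surrounding log.

	// Sketch: read one pod and report whether its PodReady condition is True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podIsReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
			"coredns-5644d7b6d9-7qhmc", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q Ready=%v\n", pod.Name, podIsReady(pod))
	}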
	I0130 20:44:56.475086   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:58.973370   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:00.161718   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace has status "Ready":"True"
	I0130 20:45:00.161741   45819 pod_ready.go:81] duration metric: took 1.015069343s waiting for pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.161754   45819 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zbdxm" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.668280   45819 pod_ready.go:92] pod "kube-proxy-zbdxm" in "kube-system" namespace has status "Ready":"True"
	I0130 20:45:00.668313   45819 pod_ready.go:81] duration metric: took 506.550797ms waiting for pod "kube-proxy-zbdxm" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.668328   45819 pod_ready.go:38] duration metric: took 1.526591158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:45:00.668343   45819 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:45:00.668398   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:45:00.682119   45819 api_server.go:72] duration metric: took 2.771845703s to wait for apiserver process to appear ...
	I0130 20:45:00.682143   45819 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:45:00.682167   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:45:00.687603   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0130 20:45:00.688287   45819 api_server.go:141] control plane version: v1.16.0
	I0130 20:45:00.688302   45819 api_server.go:131] duration metric: took 6.153997ms to wait for apiserver health ...
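The healthz wait above is a plain HTTPS GET against https://192.168.39.16:8443/healthz that loops until the apiserver answers 200 with body "ok". A minimal sketch of that loop, under the assumption that /healthz is reachable anonymously; TLS verification is skipped here only for brevity, whereas real code would load the cluster CA from the kubeconfig. This is not minikube's api_server.go.

	// Sketch: poll the apiserver /healthz endpoint until it reports "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
		}
		for {
			resp, err := client.Get("https://192.168.39.16:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
	}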
	I0130 20:45:00.688309   45819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:45:00.691917   45819 system_pods.go:59] 4 kube-system pods found
	I0130 20:45:00.691936   45819 system_pods.go:61] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.691942   45819 system_pods.go:61] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.691948   45819 system_pods.go:61] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.691954   45819 system_pods.go:61] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.691962   45819 system_pods.go:74] duration metric: took 3.648521ms to wait for pod list to return data ...
	I0130 20:45:00.691970   45819 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:45:00.694229   45819 default_sa.go:45] found service account: "default"
	I0130 20:45:00.694250   45819 default_sa.go:55] duration metric: took 2.274248ms for default service account to be created ...
	I0130 20:45:00.694258   45819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:45:00.698156   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:00.698179   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.698187   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.698198   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.698210   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.698234   45819 retry.go:31] will retry after 277.03208ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:00.979637   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:00.979660   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.979665   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.979671   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.979677   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.979694   45819 retry.go:31] will retry after 341.469517ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.326631   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:01.326666   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:01.326674   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:01.326683   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:01.326689   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:01.326713   45819 retry.go:31] will retry after 487.104661ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.818702   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:01.818733   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:01.818742   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:01.818752   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:01.818759   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:01.818779   45819 retry.go:31] will retry after 574.423042ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:02.398901   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:02.398940   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:02.398949   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:02.398959   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:02.398966   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:02.398986   45819 retry.go:31] will retry after 741.538469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:03.145137   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:03.145162   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:03.145168   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:03.145174   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:03.145179   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:03.145194   45819 retry.go:31] will retry after 742.915086ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:03.892722   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:03.892748   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:03.892753   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:03.892759   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:03.892764   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:03.892779   45819 retry.go:31] will retry after 786.727719ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.473056   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:03.473346   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:04.685933   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:04.685967   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:04.685976   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:04.685985   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:04.685993   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:04.686016   45819 retry.go:31] will retry after 1.232157955s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:05.923020   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:05.923045   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:05.923050   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:05.923056   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:05.923061   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:05.923076   45819 retry.go:31] will retry after 1.652424416s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:07.580982   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:07.581007   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:07.581013   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:07.581019   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:07.581026   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:07.581042   45819 retry.go:31] will retry after 1.774276151s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:09.360073   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:09.360098   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:09.360103   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:09.360110   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:09.360115   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:09.360133   45819 retry.go:31] will retry after 2.786181653s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:05.975152   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:07.975274   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:12.151191   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:12.151215   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:12.151221   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:12.151227   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:12.151232   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:12.151258   45819 retry.go:31] will retry after 3.456504284s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:10.472793   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:12.474310   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:14.977715   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:15.613679   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:15.613705   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:15.613711   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:15.613718   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:15.613722   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:15.613741   45819 retry.go:31] will retry after 4.434906632s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:17.472993   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:19.473530   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:20.053023   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:20.053050   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:20.053055   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:20.053062   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:20.053066   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:20.053082   45819 retry.go:31] will retry after 3.910644554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:23.969998   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:23.970027   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:23.970035   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:23.970047   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:23.970053   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:23.970075   45819 retry.go:31] will retry after 4.907431581s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:21.473946   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:23.973965   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:28.881886   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:28.881911   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:28.881917   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:28.881924   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:28.881929   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:28.881956   45819 retry.go:31] will retry after 7.594967181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:26.473519   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:28.474676   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:30.972445   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:32.973156   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:34.973590   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:36.482226   45819 system_pods.go:86] 5 kube-system pods found
	I0130 20:45:36.482255   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:36.482261   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:36.482267   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Pending
	I0130 20:45:36.482277   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:36.482284   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:36.482306   45819 retry.go:31] will retry after 8.875079493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:36.974189   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:39.474803   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:41.973709   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:43.974130   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:45.361733   45819 system_pods.go:86] 5 kube-system pods found
	I0130 20:45:45.361760   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:45.361766   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:45.361772   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:45:45.361781   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:45.361789   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:45.361820   45819 retry.go:31] will retry after 9.918306048s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0130 20:45:45.976853   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:48.476619   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:50.974748   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:52.975900   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:55.285765   45819 system_pods.go:86] 6 kube-system pods found
	I0130 20:45:55.285793   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:55.285801   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Pending
	I0130 20:45:55.285807   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:55.285813   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:45:55.285822   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:55.285828   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:55.285849   45819 retry.go:31] will retry after 12.684125727s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0130 20:45:55.473705   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:57.973533   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:59.974108   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:02.473825   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:04.973953   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:07.975898   45819 system_pods.go:86] 8 kube-system pods found
	I0130 20:46:07.975923   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:46:07.975929   45819 system_pods.go:89] "etcd-old-k8s-version-150971" [21884345-e587-4bae-88b9-78e0bdacf954] Running
	I0130 20:46:07.975933   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Running
	I0130 20:46:07.975937   45819 system_pods.go:89] "kube-controller-manager-old-k8s-version-150971" [f0cfbd77-f00e-4d40-a301-f24f6ed937e1] Pending
	I0130 20:46:07.975941   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:46:07.975944   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:46:07.975951   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:46:07.975955   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:46:07.975969   45819 retry.go:31] will retry after 15.59894457s: missing components: kube-controller-manager
	I0130 20:46:07.472712   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:09.474175   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:11.478228   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:13.973190   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:16.473264   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:18.474418   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:23.581862   45819 system_pods.go:86] 8 kube-system pods found
	I0130 20:46:23.581890   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:46:23.581895   45819 system_pods.go:89] "etcd-old-k8s-version-150971" [21884345-e587-4bae-88b9-78e0bdacf954] Running
	I0130 20:46:23.581899   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Running
	I0130 20:46:23.581904   45819 system_pods.go:89] "kube-controller-manager-old-k8s-version-150971" [f0cfbd77-f00e-4d40-a301-f24f6ed937e1] Running
	I0130 20:46:23.581907   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:46:23.581911   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:46:23.581918   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:46:23.581923   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:46:23.581932   45819 system_pods.go:126] duration metric: took 1m22.887668504s to wait for k8s-apps to be running ...
	I0130 20:46:23.581939   45819 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:46:23.581986   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:46:23.604099   45819 system_svc.go:56] duration metric: took 22.14886ms WaitForService to wait for kubelet.
	I0130 20:46:23.604134   45819 kubeadm.go:581] duration metric: took 1m25.693865663s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:46:23.604159   45819 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:46:23.607539   45819 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:46:23.607567   45819 node_conditions.go:123] node cpu capacity is 2
	I0130 20:46:23.607580   45819 node_conditions.go:105] duration metric: took 3.415829ms to run NodePressure ...
	I0130 20:46:23.607594   45819 start.go:228] waiting for startup goroutines ...
	I0130 20:46:23.607602   45819 start.go:233] waiting for cluster config update ...
	I0130 20:46:23.607615   45819 start.go:242] writing updated cluster config ...
	I0130 20:46:23.607933   45819 ssh_runner.go:195] Run: rm -f paused
	I0130 20:46:23.658357   45819 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0130 20:46:23.660375   45819 out.go:177] 
	W0130 20:46:23.661789   45819 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0130 20:46:23.663112   45819 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0130 20:46:23.664623   45819 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-150971" cluster and "default" namespace by default
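
The run above finishes with a version-skew warning: the host kubectl is 1.29.1 while the old-k8s-version cluster runs 1.16.0 (minor skew 13), whereas the later run against a 1.28.4 cluster (skew 1) completes without a warning. A minimal, hypothetical Go sketch of that kind of skew check is below; the helper name and the "warn when skew exceeds one minor version" threshold are assumptions consistent with the two runs in this log, not minikube's actual code.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of two
// "major.minor.patch" strings, e.g. ("1.29.1", "1.16.0") -> 13.
func minorSkew(kubectlVer, clusterVer string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	a, err := minor(kubectlVer)
	if err != nil {
		return 0, err
	}
	b, err := minor(clusterVer)
	if err != nil {
		return 0, err
	}
	if a > b {
		return a - b, nil
	}
	return b - a, nil
}

func main() {
	kubectlVer, clusterVer := "1.29.1", "1.16.0"
	skew, err := minorSkew(kubectlVer, clusterVer)
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVer, clusterVer, skew)
	// Threshold is an assumption: warn whenever the skew is more than one minor version.
	if skew > 1 {
		fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n", kubectlVer, clusterVer)
	}
}
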
	I0130 20:46:20.474791   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:22.973143   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:24.974320   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:27.474508   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:29.973471   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:31.973727   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:33.974180   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:36.472928   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:38.474336   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:40.973509   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:42.973942   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:45.473120   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:47.972943   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:49.973756   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:51.973913   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:54.472597   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:56.473076   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:58.974262   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:01.476906   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:03.974275   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:06.474453   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:08.973144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:10.973407   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:12.974842   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:15.473765   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:17.474938   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:19.973849   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:21.974660   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:23.977144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:26.479595   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:28.975572   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:31.473715   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:33.974243   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:36.472321   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:38.473133   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:40.973786   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:43.473691   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:45.476882   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:47.975923   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:50.474045   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:52.474411   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:54.474531   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:56.973542   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:58.974226   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:00.975045   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:03.473440   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:05.473667   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:07.973417   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:09.978199   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:09.978230   45441 pod_ready.go:81] duration metric: took 4m0.012361166s waiting for pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace to be "Ready" ...
	E0130 20:48:09.978243   45441 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:48:09.978253   45441 pod_ready.go:38] duration metric: took 4m1.998529694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:48:09.978276   45441 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:48:09.978323   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:09.978403   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:10.038921   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:10.038949   45441 cri.go:89] found id: ""
	I0130 20:48:10.038958   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:10.039017   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.043851   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:10.043902   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:10.088920   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:10.088945   45441 cri.go:89] found id: ""
	I0130 20:48:10.088952   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:10.089001   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.094186   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:10.094267   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:10.143350   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:10.143380   45441 cri.go:89] found id: ""
	I0130 20:48:10.143390   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:10.143450   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.148357   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:10.148426   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:10.187812   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:10.187848   45441 cri.go:89] found id: ""
	I0130 20:48:10.187858   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:10.187914   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.192049   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:10.192109   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:10.241052   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:10.241079   45441 cri.go:89] found id: ""
	I0130 20:48:10.241088   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:10.241139   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.245711   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:10.245763   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:10.287115   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:10.287139   45441 cri.go:89] found id: ""
	I0130 20:48:10.287148   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:10.287194   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.291627   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:10.291697   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:10.341321   45441 cri.go:89] found id: ""
	I0130 20:48:10.341346   45441 logs.go:276] 0 containers: []
	W0130 20:48:10.341356   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:10.341362   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:10.341420   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:10.385515   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:10.385543   45441 cri.go:89] found id: ""
	I0130 20:48:10.385552   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:10.385601   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.390397   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:10.390433   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:10.832689   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:10.832724   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:10.846560   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:10.846587   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:10.887801   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:10.887826   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:10.942977   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:10.943003   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:10.987642   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:10.987669   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:11.024934   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:11.024964   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:11.076336   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:11.076373   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:11.127315   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:11.127344   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:11.182944   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:11.182974   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:11.276494   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:11.276525   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:11.413186   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:11.413213   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
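
The cri.go and logs.go entries above repeatedly shell out to `crictl ps -a --quiet --name=<component>` to find each control-plane container, then tail its logs with `crictl logs --tail 400 <id>`. A rough standalone sketch of that pattern follows; it runs directly on the node rather than through minikube's ssh_runner, and the helper names are illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or not) whose name
// matches the given component, as reported by `crictl ps -a --quiet --name=...`.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %s: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

// tailLogs prints the last n lines of a container's logs via crictl.
func tailLogs(id string, n int) error {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no %s container found (err=%v)\n", c, err)
			continue
		}
		fmt.Printf("gathering logs for %s %v\n", c, ids)
		_ = tailLogs(ids[0], 400)
	}
}
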
	I0130 20:48:13.960537   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:48:13.977332   45441 api_server.go:72] duration metric: took 4m8.11544723s to wait for apiserver process to appear ...
	I0130 20:48:13.977362   45441 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:48:13.977400   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:13.977466   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:14.025510   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:14.025534   45441 cri.go:89] found id: ""
	I0130 20:48:14.025542   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:14.025593   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.030025   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:14.030103   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:14.070504   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:14.070524   45441 cri.go:89] found id: ""
	I0130 20:48:14.070531   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:14.070577   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.074858   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:14.074928   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:14.110816   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:14.110844   45441 cri.go:89] found id: ""
	I0130 20:48:14.110853   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:14.110912   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.114997   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:14.115079   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:14.169213   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:14.169240   45441 cri.go:89] found id: ""
	I0130 20:48:14.169249   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:14.169305   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.173541   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:14.173607   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:14.210634   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:14.210657   45441 cri.go:89] found id: ""
	I0130 20:48:14.210664   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:14.210717   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.215015   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:14.215074   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:14.258454   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:14.258477   45441 cri.go:89] found id: ""
	I0130 20:48:14.258484   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:14.258532   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.262486   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:14.262537   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:14.302175   45441 cri.go:89] found id: ""
	I0130 20:48:14.302205   45441 logs.go:276] 0 containers: []
	W0130 20:48:14.302213   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:14.302218   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:14.302262   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:14.339497   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:14.339523   45441 cri.go:89] found id: ""
	I0130 20:48:14.339533   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:14.339589   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.343954   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:14.343983   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:14.391168   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:14.391203   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:14.436713   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:14.436743   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:14.473899   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:14.473934   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:14.533733   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:14.533763   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:14.924087   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:14.924121   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:14.972652   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:14.972684   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:15.074398   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:15.074443   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:15.206993   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:15.207026   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:15.258807   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:15.258841   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:15.299162   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:15.299209   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:15.315611   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:15.315643   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:17.859914   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:48:17.865483   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 200:
	ok
	I0130 20:48:17.866876   45441 api_server.go:141] control plane version: v1.28.4
	I0130 20:48:17.866899   45441 api_server.go:131] duration metric: took 3.889528289s to wait for apiserver health ...
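
The api_server.go lines just above poll the apiserver healthz endpoint until it answers 200 "ok" before moving on to the pod checks. A minimal sketch of such a probe is below; the URL is taken from the log, while skipping TLS verification is an assumption made only to keep the example self-contained (the real check would trust the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers "ok"
// with HTTP 200, or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify only so the sketch runs without the cluster CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver healthz at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.52:8444/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}
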
	I0130 20:48:17.866910   45441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:48:17.866937   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:17.866992   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:17.907357   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:17.907386   45441 cri.go:89] found id: ""
	I0130 20:48:17.907396   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:17.907461   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.911558   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:17.911617   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:17.948725   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:17.948747   45441 cri.go:89] found id: ""
	I0130 20:48:17.948757   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:17.948819   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.953304   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:17.953365   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:17.994059   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:17.994091   45441 cri.go:89] found id: ""
	I0130 20:48:17.994101   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:17.994158   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.998347   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:17.998402   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:18.047814   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:18.047842   45441 cri.go:89] found id: ""
	I0130 20:48:18.047853   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:18.047914   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.052864   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:18.052927   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:18.091597   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:18.091617   45441 cri.go:89] found id: ""
	I0130 20:48:18.091625   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:18.091680   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.095921   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:18.096034   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:18.146922   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:18.146942   45441 cri.go:89] found id: ""
	I0130 20:48:18.146952   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:18.147002   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.156610   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:18.156671   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:18.209680   45441 cri.go:89] found id: ""
	I0130 20:48:18.209701   45441 logs.go:276] 0 containers: []
	W0130 20:48:18.209711   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:18.209716   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:18.209761   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:18.253810   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:18.253834   45441 cri.go:89] found id: ""
	I0130 20:48:18.253841   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:18.253883   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.258404   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:18.258433   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:18.305088   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:18.305117   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:18.629911   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:18.629948   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:18.677758   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:18.677787   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:18.779831   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:18.779869   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:18.795995   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:18.796024   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:18.844003   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:18.844034   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:18.884617   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:18.884645   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:18.931556   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:18.931591   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:19.066569   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:19.066606   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:19.115012   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:19.115041   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:19.169107   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:19.169137   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:21.731792   45441 system_pods.go:59] 8 kube-system pods found
	I0130 20:48:21.731816   45441 system_pods.go:61] "coredns-5dd5756b68-tlb8h" [547c1fe4-3ef7-421a-b460-660a05caa2ab] Running
	I0130 20:48:21.731821   45441 system_pods.go:61] "etcd-default-k8s-diff-port-877742" [a8ff44ad-5fec-415b-a574-75bce55acf8e] Running
	I0130 20:48:21.731826   45441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-877742" [b183118a-5376-412c-a991-eaebf0e6a46e] Running
	I0130 20:48:21.731830   45441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-877742" [cd5170b0-7d1c-45fd-9670-376d04e7016b] Running
	I0130 20:48:21.731834   45441 system_pods.go:61] "kube-proxy-59zvd" [ca6ef754-0898-4e1d-9ff2-9f42f456db6c] Running
	I0130 20:48:21.731838   45441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-877742" [5870d68e-b7af-408b-9484-a7e414bbe7f7] Running
	I0130 20:48:21.731845   45441 system_pods.go:61] "metrics-server-57f55c9bc5-xjc2m" [7b9a273b-d328-4ae8-925e-5bb305cfe574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:48:21.731853   45441 system_pods.go:61] "storage-provisioner" [db1a28e4-0c45-496e-a566-32a402b0841d] Running
	I0130 20:48:21.731862   45441 system_pods.go:74] duration metric: took 3.864945598s to wait for pod list to return data ...
	I0130 20:48:21.731878   45441 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:48:21.734586   45441 default_sa.go:45] found service account: "default"
	I0130 20:48:21.734604   45441 default_sa.go:55] duration metric: took 2.721611ms for default service account to be created ...
	I0130 20:48:21.734611   45441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:48:21.740794   45441 system_pods.go:86] 8 kube-system pods found
	I0130 20:48:21.740817   45441 system_pods.go:89] "coredns-5dd5756b68-tlb8h" [547c1fe4-3ef7-421a-b460-660a05caa2ab] Running
	I0130 20:48:21.740822   45441 system_pods.go:89] "etcd-default-k8s-diff-port-877742" [a8ff44ad-5fec-415b-a574-75bce55acf8e] Running
	I0130 20:48:21.740827   45441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-877742" [b183118a-5376-412c-a991-eaebf0e6a46e] Running
	I0130 20:48:21.740831   45441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-877742" [cd5170b0-7d1c-45fd-9670-376d04e7016b] Running
	I0130 20:48:21.740835   45441 system_pods.go:89] "kube-proxy-59zvd" [ca6ef754-0898-4e1d-9ff2-9f42f456db6c] Running
	I0130 20:48:21.740840   45441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-877742" [5870d68e-b7af-408b-9484-a7e414bbe7f7] Running
	I0130 20:48:21.740846   45441 system_pods.go:89] "metrics-server-57f55c9bc5-xjc2m" [7b9a273b-d328-4ae8-925e-5bb305cfe574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:48:21.740853   45441 system_pods.go:89] "storage-provisioner" [db1a28e4-0c45-496e-a566-32a402b0841d] Running
	I0130 20:48:21.740860   45441 system_pods.go:126] duration metric: took 6.244006ms to wait for k8s-apps to be running ...
	I0130 20:48:21.740867   45441 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:48:21.740906   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:48:21.756380   45441 system_svc.go:56] duration metric: took 15.505755ms WaitForService to wait for kubelet.
	I0130 20:48:21.756405   45441 kubeadm.go:581] duration metric: took 4m15.894523943s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:48:21.756429   45441 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:48:21.759579   45441 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:48:21.759605   45441 node_conditions.go:123] node cpu capacity is 2
	I0130 20:48:21.759616   45441 node_conditions.go:105] duration metric: took 3.182491ms to run NodePressure ...
	I0130 20:48:21.759626   45441 start.go:228] waiting for startup goroutines ...
	I0130 20:48:21.759632   45441 start.go:233] waiting for cluster config update ...
	I0130 20:48:21.759642   45441 start.go:242] writing updated cluster config ...
	I0130 20:48:21.759879   45441 ssh_runner.go:195] Run: rm -f paused
	I0130 20:48:21.808471   45441 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:48:21.810628   45441 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-877742" cluster and "default" namespace by default
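
The long run of pod_ready.go lines in this block is minikube re-checking the Ready condition of metrics-server-57f55c9bc5-xjc2m every couple of seconds until a four-minute deadline expires ("context deadline exceeded" at 20:48:09). A rough client-go sketch of that per-pod readiness wait follows, assuming a standard kubeconfig; the pod name and timings are taken from the log, but the helper itself is hypothetical, not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls a single pod until it is Ready or the context deadline expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, namespace)
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err()) // e.g. context deadline exceeded
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "metrics-server-57f55c9bc5-xjc2m"); err != nil {
		fmt.Println("error:", err)
	}
}
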
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 20:38:58 UTC, ends at Tue 2024-01-30 20:55:25 UTC. --
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.399694184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648125399678323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f325c26a-da3b-4a26-83e3-2f87e6ac0a0c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.400285419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=66363a1e-cef8-4751-8377-8a4eccbd5d20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.400335076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=66363a1e-cef8-4751-8377-8a4eccbd5d20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.400547884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a18f05c5071cc3d5c2acc3d8ef16ae221090aa663bbb0034595edf8fe754d1c7,PodSandboxId:0bcb2ebe732effbbd1782098056d09e0461377b9f8392a575dda7e19f974b3dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647499873830054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff3eddf0-39e5-415f-b6e1-2b9324ae67f5,},Annotations:map[string]string{io.kubernetes.container.hash: cd7a3f83,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caea105ac6df617e081cecc81dd4ba65c9a637c619374ac763237d862d8af98,PodSandboxId:8550dcc0516f94147ebec61a7ca74ca214e4a3dc445116f9066b2ea9d06ffced,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706647499516969258,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbdxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82328394-34a6-476e-994b-8469c1cd370f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c130f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15595b34a557919456566f890b263d1b2ec3d14ab43bd764942f7538fafd743c,PodSandboxId:b4e9153ebe1b813f286c161fe8b9bd7c104eb49b5d427340a0a522c12804d9e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706647498994371878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7qhmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03050fc6-39c5-45fa-8fc0-fd41a78392f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2481486e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e776ff23c68223bb4ff5b75f74ef36335945d9c5b39fba0cce4470586ca7211,PodSandboxId:5322c11b7400ef45d2bbbb8d63823a8744887bbbc6e1e06baa306bb090531f3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706647473520200673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8546a5b70f7d75b0ec40caabe5c78413,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d0a5824a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5edef1c3bea3d540bb83c092f2f450ba4092659166aa785addc4288c2d06e516,PodSandboxId:19e908cc9cdeb9d5ff9aa1c1288dcd3ef2cff34c33d908eed5a808195f231353,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706647472051324526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acd35d56c0f1b6dea95347cb06d05926b34dca9b2d90a1a9e23e04ea99abd48,PodSandboxId:a0fb4944a8a53f907b174cea6e6754b33d5471e164eeab49382797657dbfd6a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706647471871528939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f169069135e0a2b8eb5fc8f9181,},Annotations:map[string]string{io.kubern
etes.container.hash: e97c633e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44810126fec013f866d98b5fbc8a24ce47c290be31393e33afe9433d5b72f51,PodSandboxId:4df7291754b09b207030069a01d7d857e192de5dcaa8ab49665a205db74eba90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706647471736787731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=66363a1e-cef8-4751-8377-8a4eccbd5d20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.443273279Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f660f2b2-44dd-4e72-8715-e5483363217a name=/runtime.v1.RuntimeService/Version
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.443331618Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f660f2b2-44dd-4e72-8715-e5483363217a name=/runtime.v1.RuntimeService/Version
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.444695802Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7ea6dc8a-8065-4850-b1a9-04a18fa43b3b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.445065196Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648125445053848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7ea6dc8a-8065-4850-b1a9-04a18fa43b3b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.445748698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5a69ba0b-30e8-4d96-9364-c5657fef565f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.445793646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5a69ba0b-30e8-4d96-9364-c5657fef565f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.445942485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a18f05c5071cc3d5c2acc3d8ef16ae221090aa663bbb0034595edf8fe754d1c7,PodSandboxId:0bcb2ebe732effbbd1782098056d09e0461377b9f8392a575dda7e19f974b3dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647499873830054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff3eddf0-39e5-415f-b6e1-2b9324ae67f5,},Annotations:map[string]string{io.kubernetes.container.hash: cd7a3f83,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caea105ac6df617e081cecc81dd4ba65c9a637c619374ac763237d862d8af98,PodSandboxId:8550dcc0516f94147ebec61a7ca74ca214e4a3dc445116f9066b2ea9d06ffced,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706647499516969258,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbdxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82328394-34a6-476e-994b-8469c1cd370f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c130f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15595b34a557919456566f890b263d1b2ec3d14ab43bd764942f7538fafd743c,PodSandboxId:b4e9153ebe1b813f286c161fe8b9bd7c104eb49b5d427340a0a522c12804d9e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706647498994371878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7qhmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03050fc6-39c5-45fa-8fc0-fd41a78392f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2481486e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e776ff23c68223bb4ff5b75f74ef36335945d9c5b39fba0cce4470586ca7211,PodSandboxId:5322c11b7400ef45d2bbbb8d63823a8744887bbbc6e1e06baa306bb090531f3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706647473520200673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8546a5b70f7d75b0ec40caabe5c78413,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d0a5824a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5edef1c3bea3d540bb83c092f2f450ba4092659166aa785addc4288c2d06e516,PodSandboxId:19e908cc9cdeb9d5ff9aa1c1288dcd3ef2cff34c33d908eed5a808195f231353,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706647472051324526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acd35d56c0f1b6dea95347cb06d05926b34dca9b2d90a1a9e23e04ea99abd48,PodSandboxId:a0fb4944a8a53f907b174cea6e6754b33d5471e164eeab49382797657dbfd6a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706647471871528939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f169069135e0a2b8eb5fc8f9181,},Annotations:map[string]string{io.kubern
etes.container.hash: e97c633e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44810126fec013f866d98b5fbc8a24ce47c290be31393e33afe9433d5b72f51,PodSandboxId:4df7291754b09b207030069a01d7d857e192de5dcaa8ab49665a205db74eba90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706647471736787731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5a69ba0b-30e8-4d96-9364-c5657fef565f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.487861035Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=492744b9-0de5-4a42-a007-01b5916ed586 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.487920105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=492744b9-0de5-4a42-a007-01b5916ed586 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.489092934Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d0443e54-c12c-4bcb-b1aa-7830471c16b0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.489557710Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648125489436887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d0443e54-c12c-4bcb-b1aa-7830471c16b0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.490010956Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6b09de22-3533-4727-97b8-bcd86b95d02f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.490056169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6b09de22-3533-4727-97b8-bcd86b95d02f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.490203363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a18f05c5071cc3d5c2acc3d8ef16ae221090aa663bbb0034595edf8fe754d1c7,PodSandboxId:0bcb2ebe732effbbd1782098056d09e0461377b9f8392a575dda7e19f974b3dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647499873830054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff3eddf0-39e5-415f-b6e1-2b9324ae67f5,},Annotations:map[string]string{io.kubernetes.container.hash: cd7a3f83,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caea105ac6df617e081cecc81dd4ba65c9a637c619374ac763237d862d8af98,PodSandboxId:8550dcc0516f94147ebec61a7ca74ca214e4a3dc445116f9066b2ea9d06ffced,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706647499516969258,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbdxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82328394-34a6-476e-994b-8469c1cd370f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c130f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15595b34a557919456566f890b263d1b2ec3d14ab43bd764942f7538fafd743c,PodSandboxId:b4e9153ebe1b813f286c161fe8b9bd7c104eb49b5d427340a0a522c12804d9e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706647498994371878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7qhmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03050fc6-39c5-45fa-8fc0-fd41a78392f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2481486e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e776ff23c68223bb4ff5b75f74ef36335945d9c5b39fba0cce4470586ca7211,PodSandboxId:5322c11b7400ef45d2bbbb8d63823a8744887bbbc6e1e06baa306bb090531f3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706647473520200673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8546a5b70f7d75b0ec40caabe5c78413,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d0a5824a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5edef1c3bea3d540bb83c092f2f450ba4092659166aa785addc4288c2d06e516,PodSandboxId:19e908cc9cdeb9d5ff9aa1c1288dcd3ef2cff34c33d908eed5a808195f231353,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706647472051324526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acd35d56c0f1b6dea95347cb06d05926b34dca9b2d90a1a9e23e04ea99abd48,PodSandboxId:a0fb4944a8a53f907b174cea6e6754b33d5471e164eeab49382797657dbfd6a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706647471871528939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f169069135e0a2b8eb5fc8f9181,},Annotations:map[string]string{io.kubern
etes.container.hash: e97c633e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44810126fec013f866d98b5fbc8a24ce47c290be31393e33afe9433d5b72f51,PodSandboxId:4df7291754b09b207030069a01d7d857e192de5dcaa8ab49665a205db74eba90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706647471736787731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6b09de22-3533-4727-97b8-bcd86b95d02f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.530786681Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8ae8258a-896e-4952-961c-3a59b908b5e7 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.530880851Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8ae8258a-896e-4952-961c-3a59b908b5e7 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.532348335Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b4f10f4c-ec27-46e6-b5aa-54e416673a24 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.532806340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648125532791675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b4f10f4c-ec27-46e6-b5aa-54e416673a24 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.533403954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dd21f58e-8837-4cad-bebc-3d664d052b20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.533525011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dd21f58e-8837-4cad-bebc-3d664d052b20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:55:25 old-k8s-version-150971 crio[730]: time="2024-01-30 20:55:25.533718876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a18f05c5071cc3d5c2acc3d8ef16ae221090aa663bbb0034595edf8fe754d1c7,PodSandboxId:0bcb2ebe732effbbd1782098056d09e0461377b9f8392a575dda7e19f974b3dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647499873830054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff3eddf0-39e5-415f-b6e1-2b9324ae67f5,},Annotations:map[string]string{io.kubernetes.container.hash: cd7a3f83,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caea105ac6df617e081cecc81dd4ba65c9a637c619374ac763237d862d8af98,PodSandboxId:8550dcc0516f94147ebec61a7ca74ca214e4a3dc445116f9066b2ea9d06ffced,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706647499516969258,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbdxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82328394-34a6-476e-994b-8469c1cd370f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c130f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15595b34a557919456566f890b263d1b2ec3d14ab43bd764942f7538fafd743c,PodSandboxId:b4e9153ebe1b813f286c161fe8b9bd7c104eb49b5d427340a0a522c12804d9e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706647498994371878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7qhmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03050fc6-39c5-45fa-8fc0-fd41a78392f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2481486e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e776ff23c68223bb4ff5b75f74ef36335945d9c5b39fba0cce4470586ca7211,PodSandboxId:5322c11b7400ef45d2bbbb8d63823a8744887bbbc6e1e06baa306bb090531f3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706647473520200673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8546a5b70f7d75b0ec40caabe5c78413,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d0a5824a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5edef1c3bea3d540bb83c092f2f450ba4092659166aa785addc4288c2d06e516,PodSandboxId:19e908cc9cdeb9d5ff9aa1c1288dcd3ef2cff34c33d908eed5a808195f231353,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706647472051324526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acd35d56c0f1b6dea95347cb06d05926b34dca9b2d90a1a9e23e04ea99abd48,PodSandboxId:a0fb4944a8a53f907b174cea6e6754b33d5471e164eeab49382797657dbfd6a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706647471871528939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f169069135e0a2b8eb5fc8f9181,},Annotations:map[string]string{io.kubern
etes.container.hash: e97c633e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44810126fec013f866d98b5fbc8a24ce47c290be31393e33afe9433d5b72f51,PodSandboxId:4df7291754b09b207030069a01d7d857e192de5dcaa8ab49665a205db74eba90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706647471736787731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dd21f58e-8837-4cad-bebc-3d664d052b20 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a18f05c5071cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   0bcb2ebe732ef       storage-provisioner
	9caea105ac6df       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   8550dcc0516f9       kube-proxy-zbdxm
	15595b34a5579       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   b4e9153ebe1b8       coredns-5644d7b6d9-7qhmc
	9e776ff23c682       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   5322c11b7400e       etcd-old-k8s-version-150971
	5edef1c3bea3d       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   19e908cc9cdeb       kube-scheduler-old-k8s-version-150971
	3acd35d56c0f1       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   a0fb4944a8a53       kube-apiserver-old-k8s-version-150971
	c44810126fec0       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   4df7291754b09       kube-controller-manager-old-k8s-version-150971
	
	
	==> coredns [15595b34a557919456566f890b263d1b2ec3d14ab43bd764942f7538fafd743c] <==
	.:53
	2024-01-30T20:44:59.299Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2024-01-30T20:44:59.299Z [INFO] CoreDNS-1.6.2
	2024-01-30T20:44:59.299Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-30T20:44:59.314Z [INFO] 127.0.0.1:56994 - 48072 "HINFO IN 7427872625022628517.7266604427229779466. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015095819s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-150971
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-150971
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=old-k8s-version-150971
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T20_44_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 20:44:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 20:54:37 +0000   Tue, 30 Jan 2024 20:44:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 20:54:37 +0000   Tue, 30 Jan 2024 20:44:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 20:54:37 +0000   Tue, 30 Jan 2024 20:44:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 20:54:37 +0000   Tue, 30 Jan 2024 20:44:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    old-k8s-version-150971
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 2bfe980287ab43929699b829e9c9d14b
	 System UUID:                2bfe9802-87ab-4392-9699-b829e9c9d14b
	 Boot ID:                    0d16b4b8-7f22-45dc-9866-42e9c8b3f5ef
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-7qhmc                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-150971                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                kube-apiserver-old-k8s-version-150971             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                kube-controller-manager-old-k8s-version-150971    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                kube-proxy-zbdxm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-150971             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                metrics-server-74d5856cc6-22948                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-150971     Node old-k8s-version-150971 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-150971     Node old-k8s-version-150971 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-150971     Node old-k8s-version-150971 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-150971  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan30 20:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070534] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.759042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.277267] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145830] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000008] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan30 20:39] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.975604] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.130209] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.163264] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.147872] systemd-fstab-generator[691]: Ignoring "noauto" for root device
	[  +0.263126] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[ +19.175912] systemd-fstab-generator[1038]: Ignoring "noauto" for root device
	[  +0.419326] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.227125] kauditd_printk_skb: 20 callbacks suppressed
	[Jan30 20:40] hrtimer: interrupt took 4310528 ns
	[Jan30 20:44] systemd-fstab-generator[3200]: Ignoring "noauto" for root device
	[  +0.667856] kauditd_printk_skb: 8 callbacks suppressed
	[Jan30 20:45] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9e776ff23c68223bb4ff5b75f74ef36335945d9c5b39fba0cce4470586ca7211] <==
	2024-01-30 20:44:33.644289 I | raft: b6c76b3131c1024 became follower at term 0
	2024-01-30 20:44:33.644308 I | raft: newRaft b6c76b3131c1024 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-30 20:44:33.644323 I | raft: b6c76b3131c1024 became follower at term 1
	2024-01-30 20:44:33.651330 W | auth: simple token is not cryptographically signed
	2024-01-30 20:44:33.656676 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-30 20:44:33.657611 I | etcdserver: b6c76b3131c1024 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-30 20:44:33.657916 I | etcdserver/membership: added member b6c76b3131c1024 [https://192.168.39.16:2380] to cluster cad58bbf0f3daddf
	2024-01-30 20:44:33.659560 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-30 20:44:33.660032 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-30 20:44:33.660166 I | embed: listening for metrics on http://192.168.39.16:2381
	2024-01-30 20:44:34.044765 I | raft: b6c76b3131c1024 is starting a new election at term 1
	2024-01-30 20:44:34.044906 I | raft: b6c76b3131c1024 became candidate at term 2
	2024-01-30 20:44:34.045062 I | raft: b6c76b3131c1024 received MsgVoteResp from b6c76b3131c1024 at term 2
	2024-01-30 20:44:34.045189 I | raft: b6c76b3131c1024 became leader at term 2
	2024-01-30 20:44:34.045285 I | raft: raft.node: b6c76b3131c1024 elected leader b6c76b3131c1024 at term 2
	2024-01-30 20:44:34.045674 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-30 20:44:34.047297 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-30 20:44:34.047343 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-30 20:44:34.047377 I | etcdserver: published {Name:old-k8s-version-150971 ClientURLs:[https://192.168.39.16:2379]} to cluster cad58bbf0f3daddf
	2024-01-30 20:44:34.047383 I | embed: ready to serve client requests
	2024-01-30 20:44:34.047784 I | embed: ready to serve client requests
	2024-01-30 20:44:34.048719 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-30 20:44:34.050922 I | embed: serving client requests on 192.168.39.16:2379
	2024-01-30 20:54:34.066909 I | mvcc: store.index: compact 664
	2024-01-30 20:54:34.070586 I | mvcc: finished scheduled compaction at 664 (took 3.171275ms)
	
	
	==> kernel <==
	 20:55:25 up 16 min,  0 users,  load average: 0.12, 0.17, 0.16
	Linux old-k8s-version-150971 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3acd35d56c0f1b6dea95347cb06d05926b34dca9b2d90a1a9e23e04ea99abd48] <==
	I0130 20:48:00.417053       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 20:48:00.417359       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 20:48:00.417442       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:48:00.417571       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:49:38.305526       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 20:49:38.305616       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 20:49:38.305669       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:49:38.305677       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:50:38.306188       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 20:50:38.306542       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 20:50:38.306623       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:50:38.306636       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:52:38.307157       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 20:52:38.307629       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 20:52:38.307717       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:52:38.307743       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:54:38.308883       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 20:54:38.309168       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 20:54:38.309257       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:54:38.309279       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c44810126fec013f866d98b5fbc8a24ce47c290be31393e33afe9433d5b72f51] <==
	E0130 20:48:59.284936       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:49:13.252022       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:49:29.537367       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:49:45.254693       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:49:59.790035       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:50:17.256650       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:50:30.042843       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:50:49.258592       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:51:00.295159       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:51:21.260817       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:51:30.547201       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:51:53.264663       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:52:00.799586       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:52:25.266772       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:52:31.051566       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:52:57.269679       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:53:01.303524       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:53:29.271844       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:53:31.555398       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:54:01.274155       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:54:01.807212       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0130 20:54:32.060322       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:54:33.276548       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:55:02.312382       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:55:05.278879       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [9caea105ac6df617e081cecc81dd4ba65c9a637c619374ac763237d862d8af98] <==
	W0130 20:44:59.896676       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0130 20:44:59.949784       1 node.go:135] Successfully retrieved node IP: 192.168.39.16
	I0130 20:44:59.949847       1 server_others.go:149] Using iptables Proxier.
	I0130 20:44:59.963781       1 server.go:529] Version: v1.16.0
	I0130 20:44:59.974254       1 config.go:131] Starting endpoints config controller
	I0130 20:44:59.976908       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0130 20:44:59.977182       1 config.go:313] Starting service config controller
	I0130 20:44:59.977418       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0130 20:45:00.093002       1 shared_informer.go:204] Caches are synced for service config 
	I0130 20:45:00.093282       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [5edef1c3bea3d540bb83c092f2f450ba4092659166aa785addc4288c2d06e516] <==
	I0130 20:44:37.306193       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0130 20:44:37.307006       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0130 20:44:37.366779       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 20:44:37.367041       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 20:44:37.367216       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 20:44:37.367302       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 20:44:37.367362       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 20:44:37.367423       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 20:44:37.367574       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 20:44:37.372780       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 20:44:37.372868       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 20:44:37.376604       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 20:44:37.376797       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 20:44:38.369553       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 20:44:38.373627       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 20:44:38.376603       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 20:44:38.377947       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 20:44:38.379959       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 20:44:38.381126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 20:44:38.385575       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 20:44:38.386563       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 20:44:38.387557       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 20:44:38.389008       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 20:44:38.392386       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 20:44:56.846806       1 factory.go:585] pod is already present in the activeQ
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 20:38:58 UTC, ends at Tue 2024-01-30 20:55:26 UTC. --
	Jan 30 20:50:36 old-k8s-version-150971 kubelet[3206]: E0130 20:50:36.421993    3206 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 20:50:36 old-k8s-version-150971 kubelet[3206]: E0130 20:50:36.422018    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 30 20:50:51 old-k8s-version-150971 kubelet[3206]: E0130 20:50:51.398362    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:51:06 old-k8s-version-150971 kubelet[3206]: E0130 20:51:06.399321    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:51:21 old-k8s-version-150971 kubelet[3206]: E0130 20:51:21.398433    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:51:34 old-k8s-version-150971 kubelet[3206]: E0130 20:51:34.398376    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:51:45 old-k8s-version-150971 kubelet[3206]: E0130 20:51:45.398597    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:51:56 old-k8s-version-150971 kubelet[3206]: E0130 20:51:56.400716    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:52:08 old-k8s-version-150971 kubelet[3206]: E0130 20:52:08.399237    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:52:22 old-k8s-version-150971 kubelet[3206]: E0130 20:52:22.400501    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:52:33 old-k8s-version-150971 kubelet[3206]: E0130 20:52:33.398359    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:52:46 old-k8s-version-150971 kubelet[3206]: E0130 20:52:46.399009    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:53:00 old-k8s-version-150971 kubelet[3206]: E0130 20:53:00.401058    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:53:13 old-k8s-version-150971 kubelet[3206]: E0130 20:53:13.398289    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:53:27 old-k8s-version-150971 kubelet[3206]: E0130 20:53:27.398873    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:53:42 old-k8s-version-150971 kubelet[3206]: E0130 20:53:42.399624    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:53:55 old-k8s-version-150971 kubelet[3206]: E0130 20:53:55.399991    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:54:06 old-k8s-version-150971 kubelet[3206]: E0130 20:54:06.398229    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:54:19 old-k8s-version-150971 kubelet[3206]: E0130 20:54:19.398632    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:54:30 old-k8s-version-150971 kubelet[3206]: E0130 20:54:30.506299    3206 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 30 20:54:33 old-k8s-version-150971 kubelet[3206]: E0130 20:54:33.398293    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:54:44 old-k8s-version-150971 kubelet[3206]: E0130 20:54:44.399863    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:54:57 old-k8s-version-150971 kubelet[3206]: E0130 20:54:57.398741    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:55:12 old-k8s-version-150971 kubelet[3206]: E0130 20:55:12.399663    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:55:23 old-k8s-version-150971 kubelet[3206]: E0130 20:55:23.398565    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [a18f05c5071cc3d5c2acc3d8ef16ae221090aa663bbb0034595edf8fe754d1c7] <==
	I0130 20:45:00.174576       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 20:45:00.190353       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 20:45:00.192193       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 20:45:00.207961       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 20:45:00.208954       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-150971_ba0f01f0-87aa-4fec-9246-93103f198f70!
	I0130 20:45:00.218859       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78efc787-05b9-458a-b56a-6a3ffd7f6b0a", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-150971_ba0f01f0-87aa-4fec-9246-93103f198f70 became leader
	I0130 20:45:00.309893       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-150971_ba0f01f0-87aa-4fec-9246-93103f198f70!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-150971 -n old-k8s-version-150971
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-150971 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-22948
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-150971 describe pod metrics-server-74d5856cc6-22948
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-150971 describe pod metrics-server-74d5856cc6-22948: exit status 1 (70.677214ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-22948" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-150971 describe pod metrics-server-74d5856cc6-22948: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.25s)
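Note on the kubelet ImagePullBackOff entries above: they are consistent with the metrics-server addon having been pointed at a placeholder registry for this test. The Audit output later in this report records the enable step for this profile; reconstructed as a single CLI invocation (a sketch shown for reference only, not re-run here), it would look roughly like:

	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-150971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain

With the registry overridden to fake.domain, the resulting image reference fake.domain/registry.k8s.io/echoserver:1.4 cannot be pulled, which matches the repeated back-off messages in the kubelet log.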

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0130 20:48:39.710870   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 20:49:30.821076   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 20:51:31.181484   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-30 20:57:22.399099245 +0000 UTC m=+5679.476069010
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
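To reproduce this check by hand, one could list the pods the test was polling for; the command below is a sketch, with the context name, namespace, and label selector taken from the lines above:

	kubectl --context default-k8s-diff-port-877742 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

No pod matching that selector appeared within the 9m0s window, so this step was marked failed and the post-mortem below was collected.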
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-877742 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-877742 logs -n 25: (1.666039164s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:28 UTC | 30 Jan 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:28 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-757744 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | disable-driver-mounts-757744                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:31 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473743             | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208583            | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC | 30 Jan 24 20:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-877742  | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC | 30 Jan 24 20:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC |                     |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473743                  | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208583                 | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-150971        | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-877742       | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC | 30 Jan 24 20:48 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-150971             | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC | 30 Jan 24 20:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 20:36:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 20:36:09.643751   45819 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:36:09.644027   45819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:36:09.644038   45819 out.go:309] Setting ErrFile to fd 2...
	I0130 20:36:09.644045   45819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:36:09.644230   45819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:36:09.644766   45819 out.go:303] Setting JSON to false
	I0130 20:36:09.645668   45819 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4715,"bootTime":1706642255,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 20:36:09.645727   45819 start.go:138] virtualization: kvm guest
	I0130 20:36:09.648102   45819 out.go:177] * [old-k8s-version-150971] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 20:36:09.649772   45819 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 20:36:09.651000   45819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 20:36:09.649826   45819 notify.go:220] Checking for updates...
	I0130 20:36:09.653462   45819 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:36:09.654761   45819 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 20:36:09.655939   45819 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 20:36:09.657140   45819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 20:36:09.658638   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:36:09.659027   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:36:09.659066   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:36:09.672985   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39323
	I0130 20:36:09.673381   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:36:09.673876   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:36:09.673897   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:36:09.674191   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:36:09.674351   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:36:09.676038   45819 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0130 20:36:09.677315   45819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 20:36:09.677582   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:36:09.677630   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:36:09.691259   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0130 20:36:09.691604   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:36:09.692060   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:36:09.692089   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:36:09.692371   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:36:09.692555   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:36:09.726172   45819 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 20:36:09.727421   45819 start.go:298] selected driver: kvm2
	I0130 20:36:09.727433   45819 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:36:09.727546   45819 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 20:36:09.728186   45819 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:36:09.728255   45819 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 20:36:09.742395   45819 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 20:36:09.742715   45819 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 20:36:09.742771   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:36:09.742784   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:36:09.742794   45819 start_flags.go:321] config:
	{Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:36:09.742977   45819 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:36:09.745577   45819 out.go:177] * Starting control plane node old-k8s-version-150971 in cluster old-k8s-version-150971
	I0130 20:36:10.483495   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:09.746820   45819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 20:36:09.746852   45819 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0130 20:36:09.746865   45819 cache.go:56] Caching tarball of preloaded images
	I0130 20:36:09.746951   45819 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 20:36:09.746960   45819 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0130 20:36:09.747061   45819 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/config.json ...
	I0130 20:36:09.747229   45819 start.go:365] acquiring machines lock for old-k8s-version-150971: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:36:13.555547   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:19.635533   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:22.707498   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:28.787473   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:31.859544   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:37.939524   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:41.011456   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:47.091510   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:50.163505   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:56.243497   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:59.315474   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:05.395536   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:08.467514   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:14.547517   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:17.619561   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:23.699509   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:26.771568   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:32.851483   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:35.923502   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:42.003515   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:45.075526   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:51.155512   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:54.227514   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:38:00.307532   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:38:03.311451   45037 start.go:369] acquired machines lock for "embed-certs-208583" in 4m29.471089592s
	I0130 20:38:03.311507   45037 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:03.311514   45037 fix.go:54] fixHost starting: 
	I0130 20:38:03.311893   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:03.311933   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:03.326477   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0130 20:38:03.326949   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:03.327373   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:03.327403   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:03.327758   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:03.327946   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:03.328115   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:03.329604   45037 fix.go:102] recreateIfNeeded on embed-certs-208583: state=Stopped err=<nil>
	I0130 20:38:03.329646   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	W0130 20:38:03.329810   45037 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:03.331493   45037 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208583" ...
	I0130 20:38:03.332735   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Start
	I0130 20:38:03.332862   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring networks are active...
	I0130 20:38:03.333514   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring network default is active
	I0130 20:38:03.333859   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring network mk-embed-certs-208583 is active
	I0130 20:38:03.334154   45037 main.go:141] libmachine: (embed-certs-208583) Getting domain xml...
	I0130 20:38:03.334860   45037 main.go:141] libmachine: (embed-certs-208583) Creating domain...
	I0130 20:38:03.309254   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:03.309293   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:38:03.311318   44923 machine.go:91] provisioned docker machine in 4m37.382925036s
	I0130 20:38:03.311359   44923 fix.go:56] fixHost completed within 4m37.403399512s
	I0130 20:38:03.311364   44923 start.go:83] releasing machines lock for "no-preload-473743", held for 4m37.403435936s
	W0130 20:38:03.311387   44923 start.go:694] error starting host: provision: host is not running
	W0130 20:38:03.311504   44923 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0130 20:38:03.311518   44923 start.go:709] Will try again in 5 seconds ...
	I0130 20:38:04.507963   45037 main.go:141] libmachine: (embed-certs-208583) Waiting to get IP...
	I0130 20:38:04.508755   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.509133   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.509207   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.509115   46132 retry.go:31] will retry after 189.527185ms: waiting for machine to come up
	I0130 20:38:04.700560   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.701193   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.701223   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.701137   46132 retry.go:31] will retry after 239.29825ms: waiting for machine to come up
	I0130 20:38:04.941612   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.942080   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.942116   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.942040   46132 retry.go:31] will retry after 388.672579ms: waiting for machine to come up
	I0130 20:38:05.332617   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:05.333018   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:05.333041   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:05.332968   46132 retry.go:31] will retry after 525.5543ms: waiting for machine to come up
	I0130 20:38:05.859677   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:05.860094   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:05.860126   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:05.860055   46132 retry.go:31] will retry after 595.87535ms: waiting for machine to come up
	I0130 20:38:06.457828   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:06.458220   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:06.458244   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:06.458197   46132 retry.go:31] will retry after 766.148522ms: waiting for machine to come up
	I0130 20:38:07.226151   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:07.226615   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:07.226652   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:07.226558   46132 retry.go:31] will retry after 843.449223ms: waiting for machine to come up
	I0130 20:38:08.070983   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:08.071381   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:08.071407   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:08.071338   46132 retry.go:31] will retry after 1.079839146s: waiting for machine to come up
	I0130 20:38:08.313897   44923 start.go:365] acquiring machines lock for no-preload-473743: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:38:09.152768   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:09.153087   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:09.153113   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:09.153034   46132 retry.go:31] will retry after 1.855245571s: waiting for machine to come up
	I0130 20:38:11.010893   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:11.011260   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:11.011299   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:11.011196   46132 retry.go:31] will retry after 2.159062372s: waiting for machine to come up
	I0130 20:38:13.172734   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:13.173144   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:13.173173   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:13.173106   46132 retry.go:31] will retry after 2.73165804s: waiting for machine to come up
	I0130 20:38:15.908382   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:15.908803   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:15.908834   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:15.908732   46132 retry.go:31] will retry after 3.268718285s: waiting for machine to come up
	I0130 20:38:23.603972   45441 start.go:369] acquired machines lock for "default-k8s-diff-port-877742" in 3m48.064811183s
	I0130 20:38:23.604051   45441 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:23.604061   45441 fix.go:54] fixHost starting: 
	I0130 20:38:23.604420   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:23.604456   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:23.620189   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0130 20:38:23.620538   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:23.621035   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:38:23.621073   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:23.621415   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:23.621584   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:23.621739   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:38:23.623158   45441 fix.go:102] recreateIfNeeded on default-k8s-diff-port-877742: state=Stopped err=<nil>
	I0130 20:38:23.623185   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	W0130 20:38:23.623382   45441 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:23.625974   45441 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-877742" ...
	I0130 20:38:19.178930   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:19.179358   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:19.179389   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:19.179300   46132 retry.go:31] will retry after 3.117969425s: waiting for machine to come up
	I0130 20:38:22.300539   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.300957   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has current primary IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.300982   45037 main.go:141] libmachine: (embed-certs-208583) Found IP for machine: 192.168.61.63
	I0130 20:38:22.300997   45037 main.go:141] libmachine: (embed-certs-208583) Reserving static IP address...
	I0130 20:38:22.301371   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "embed-certs-208583", mac: "52:54:00:43:f2:e1", ip: "192.168.61.63"} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.301395   45037 main.go:141] libmachine: (embed-certs-208583) Reserved static IP address: 192.168.61.63
	I0130 20:38:22.301409   45037 main.go:141] libmachine: (embed-certs-208583) DBG | skip adding static IP to network mk-embed-certs-208583 - found existing host DHCP lease matching {name: "embed-certs-208583", mac: "52:54:00:43:f2:e1", ip: "192.168.61.63"}
	I0130 20:38:22.301420   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Getting to WaitForSSH function...
	I0130 20:38:22.301436   45037 main.go:141] libmachine: (embed-certs-208583) Waiting for SSH to be available...
	I0130 20:38:22.303472   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.303820   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.303842   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.303968   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Using SSH client type: external
	I0130 20:38:22.304011   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa (-rw-------)
	I0130 20:38:22.304042   45037 main.go:141] libmachine: (embed-certs-208583) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:38:22.304052   45037 main.go:141] libmachine: (embed-certs-208583) DBG | About to run SSH command:
	I0130 20:38:22.304065   45037 main.go:141] libmachine: (embed-certs-208583) DBG | exit 0
	I0130 20:38:22.398610   45037 main.go:141] libmachine: (embed-certs-208583) DBG | SSH cmd err, output: <nil>: 
	I0130 20:38:22.398945   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetConfigRaw
	I0130 20:38:22.399605   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:22.402157   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.402531   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.402569   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.402759   45037 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/config.json ...
	I0130 20:38:22.402974   45037 machine.go:88] provisioning docker machine ...
	I0130 20:38:22.402999   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:22.403238   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.403440   45037 buildroot.go:166] provisioning hostname "embed-certs-208583"
	I0130 20:38:22.403462   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.403642   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.405694   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.406026   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.406055   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.406180   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.406429   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.406599   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.406734   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.406904   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:22.407422   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:22.407446   45037 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208583 && echo "embed-certs-208583" | sudo tee /etc/hostname
	I0130 20:38:22.548206   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208583
	
	I0130 20:38:22.548240   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.550933   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.551316   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.551345   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.551492   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.551690   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.551821   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.551934   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.552129   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:22.552425   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:22.552441   45037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:38:22.687464   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:22.687491   45037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:38:22.687536   45037 buildroot.go:174] setting up certificates
	I0130 20:38:22.687551   45037 provision.go:83] configureAuth start
	I0130 20:38:22.687562   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.687813   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:22.690307   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.690664   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.690686   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.690855   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.693139   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.693426   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.693462   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.693597   45037 provision.go:138] copyHostCerts
	I0130 20:38:22.693667   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:38:22.693686   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:38:22.693766   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:38:22.693866   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:38:22.693876   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:38:22.693912   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:38:22.693986   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:38:22.693997   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:38:22.694036   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:38:22.694122   45037 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208583 san=[192.168.61.63 192.168.61.63 localhost 127.0.0.1 minikube embed-certs-208583]
	I0130 20:38:22.862847   45037 provision.go:172] copyRemoteCerts
	I0130 20:38:22.862902   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:38:22.862921   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.865533   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.865812   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.865842   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.866006   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.866200   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.866315   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.866496   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:22.959746   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:38:22.982164   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 20:38:23.004087   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:38:23.025875   45037 provision.go:86] duration metric: configureAuth took 338.306374ms
	I0130 20:38:23.025896   45037 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:38:23.026090   45037 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:23.026173   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.028688   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.028913   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.028946   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.029125   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.029277   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.029430   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.029550   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.029679   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:23.029980   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:23.029995   45037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:38:23.337986   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:38:23.338008   45037 machine.go:91] provisioned docker machine in 935.018208ms
	I0130 20:38:23.338016   45037 start.go:300] post-start starting for "embed-certs-208583" (driver="kvm2")
	I0130 20:38:23.338026   45037 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:38:23.338051   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.338301   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:38:23.338327   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.341005   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.341398   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.341429   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.341516   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.341686   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.341825   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.341997   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.437500   45037 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:38:23.441705   45037 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:38:23.441724   45037 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:38:23.441784   45037 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:38:23.441851   45037 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:38:23.441937   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:38:23.450700   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:23.471898   45037 start.go:303] post-start completed in 133.870929ms
	I0130 20:38:23.471916   45037 fix.go:56] fixHost completed within 20.160401625s
	I0130 20:38:23.471940   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.474341   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.474659   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.474695   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.474793   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.474984   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.475181   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.475341   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.475515   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:23.475878   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:23.475891   45037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:38:23.603819   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647103.552984334
	
	I0130 20:38:23.603841   45037 fix.go:206] guest clock: 1706647103.552984334
	I0130 20:38:23.603848   45037 fix.go:219] Guest: 2024-01-30 20:38:23.552984334 +0000 UTC Remote: 2024-01-30 20:38:23.471920461 +0000 UTC m=+289.780929635 (delta=81.063873ms)
	I0130 20:38:23.603879   45037 fix.go:190] guest clock delta is within tolerance: 81.063873ms
	I0130 20:38:23.603885   45037 start.go:83] releasing machines lock for "embed-certs-208583", held for 20.292396099s
	I0130 20:38:23.603916   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.604168   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:23.606681   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.607027   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.607060   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.607190   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607703   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607876   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607947   45037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:38:23.607999   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.608115   45037 ssh_runner.go:195] Run: cat /version.json
	I0130 20:38:23.608140   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.610693   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611052   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.611078   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611154   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611199   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.611380   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.611530   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.611585   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.611625   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611666   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.611790   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.611935   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.612081   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.612197   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.725868   45037 ssh_runner.go:195] Run: systemctl --version
	I0130 20:38:23.731516   45037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:38:23.872093   45037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:38:23.878418   45037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:38:23.878493   45037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:38:23.892910   45037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:38:23.892934   45037 start.go:475] detecting cgroup driver to use...
	I0130 20:38:23.893007   45037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:38:23.905950   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:38:23.917437   45037 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:38:23.917484   45037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:38:23.929241   45037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:38:23.940979   45037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:38:24.045106   45037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:38:24.160413   45037 docker.go:233] disabling docker service ...
	I0130 20:38:24.160486   45037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:38:24.173684   45037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:38:24.185484   45037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:38:24.308292   45037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:38:24.430021   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:38:24.442910   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:38:24.460145   45037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:38:24.460211   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.469163   45037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:38:24.469225   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.478396   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.487374   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.496306   45037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:38:24.505283   45037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:38:24.512919   45037 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:38:24.512974   45037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:38:24.523939   45037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:38:24.533002   45037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:38:24.665917   45037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:38:24.839797   45037 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:38:24.839866   45037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:38:24.851397   45037 start.go:543] Will wait 60s for crictl version
	I0130 20:38:24.851454   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:38:24.855227   45037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:38:24.888083   45037 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:38:24.888163   45037 ssh_runner.go:195] Run: crio --version
	I0130 20:38:24.934626   45037 ssh_runner.go:195] Run: crio --version
	I0130 20:38:24.984233   45037 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 20:38:23.627365   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Start
	I0130 20:38:23.627532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring networks are active...
	I0130 20:38:23.628247   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring network default is active
	I0130 20:38:23.628650   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring network mk-default-k8s-diff-port-877742 is active
	I0130 20:38:23.629109   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Getting domain xml...
	I0130 20:38:23.629715   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Creating domain...
	I0130 20:38:24.849156   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting to get IP...
	I0130 20:38:24.850261   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:24.850701   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:24.850729   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:24.850645   46249 retry.go:31] will retry after 259.328149ms: waiting for machine to come up
	I0130 20:38:25.112451   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.112941   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.112971   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.112905   46249 retry.go:31] will retry after 283.994822ms: waiting for machine to come up
	I0130 20:38:25.398452   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.398937   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.398968   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.398904   46249 retry.go:31] will retry after 348.958329ms: waiting for machine to come up
	I0130 20:38:24.985681   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:24.988666   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:24.989016   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:24.989042   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:24.989288   45037 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0130 20:38:24.993626   45037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:25.005749   45037 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:38:25.005817   45037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:25.047605   45037 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:38:25.047674   45037 ssh_runner.go:195] Run: which lz4
	I0130 20:38:25.051662   45037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:38:25.055817   45037 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:38:25.055849   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:38:26.895244   45037 crio.go:444] Took 1.843605 seconds to copy over tarball
	I0130 20:38:26.895332   45037 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:38:25.749560   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.750020   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.750048   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.749985   46249 retry.go:31] will retry after 597.656366ms: waiting for machine to come up
	I0130 20:38:26.349518   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.349957   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.350004   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:26.349929   46249 retry.go:31] will retry after 600.926171ms: waiting for machine to come up
	I0130 20:38:26.952713   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.953319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.953343   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:26.953276   46249 retry.go:31] will retry after 654.976543ms: waiting for machine to come up
	I0130 20:38:27.610017   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:27.610464   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:27.610494   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:27.610413   46249 retry.go:31] will retry after 881.075627ms: waiting for machine to come up
	I0130 20:38:28.493641   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:28.494188   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:28.494218   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:28.494136   46249 retry.go:31] will retry after 1.436302447s: waiting for machine to come up
	I0130 20:38:29.932271   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:29.932794   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:29.932825   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:29.932729   46249 retry.go:31] will retry after 1.394659615s: waiting for machine to come up
	I0130 20:38:29.834721   45037 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.939351369s)
	I0130 20:38:29.834746   45037 crio.go:451] Took 2.939470 seconds to extract the tarball
	I0130 20:38:29.834754   45037 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:38:29.875618   45037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:29.921569   45037 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:38:29.921593   45037 cache_images.go:84] Images are preloaded, skipping loading
	I0130 20:38:29.921661   45037 ssh_runner.go:195] Run: crio config
	I0130 20:38:29.981565   45037 cni.go:84] Creating CNI manager for ""
	I0130 20:38:29.981590   45037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:29.981612   45037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:38:29.981637   45037 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.63 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-208583 NodeName:embed-certs-208583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:38:29.981824   45037 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-208583"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:38:29.981919   45037 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-208583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-208583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:38:29.981984   45037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:38:29.991601   45037 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:38:29.991665   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:38:30.000815   45037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0130 20:38:30.016616   45037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:38:30.032999   45037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0130 20:38:30.052735   45037 ssh_runner.go:195] Run: grep 192.168.61.63	control-plane.minikube.internal$ /etc/hosts
	I0130 20:38:30.057008   45037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:30.069968   45037 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583 for IP: 192.168.61.63
	I0130 20:38:30.070004   45037 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:30.070164   45037 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:38:30.070201   45037 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:38:30.070263   45037 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/client.key
	I0130 20:38:30.070323   45037 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.key.9879da99
	I0130 20:38:30.070370   45037 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.key
	I0130 20:38:30.070496   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:38:30.070531   45037 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:38:30.070541   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:38:30.070561   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:38:30.070586   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:38:30.070612   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:38:30.070659   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:30.071211   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:38:30.098665   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:38:30.125013   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:38:30.150013   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:38:30.177206   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:38:30.202683   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:38:30.225774   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:38:30.249090   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:38:30.274681   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:38:30.302316   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:38:30.326602   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:38:30.351136   45037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:38:30.368709   45037 ssh_runner.go:195] Run: openssl version
	I0130 20:38:30.374606   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:38:30.386421   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.391240   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.391314   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.397082   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:38:30.409040   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:38:30.420910   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.425929   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.425971   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.431609   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:38:30.443527   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:38:30.455200   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.460242   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.460307   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.466225   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:38:30.479406   45037 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:38:30.485331   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:38:30.493468   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:38:30.499465   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:38:30.505394   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:38:30.511152   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:38:30.516951   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:38:30.522596   45037 kubeadm.go:404] StartCluster: {Name:embed-certs-208583 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-208583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:38:30.522698   45037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:38:30.522747   45037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:30.559669   45037 cri.go:89] found id: ""
	I0130 20:38:30.559740   45037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:38:30.571465   45037 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:38:30.571487   45037 kubeadm.go:636] restartCluster start
	I0130 20:38:30.571539   45037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:38:30.581398   45037 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:30.582366   45037 kubeconfig.go:92] found "embed-certs-208583" server: "https://192.168.61.63:8443"
	I0130 20:38:30.584719   45037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:38:30.593986   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:30.594031   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:30.606926   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.094476   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:31.094545   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:31.106991   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.594553   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:31.594633   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:31.607554   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:32.094029   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:32.094114   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:32.107447   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:32.594998   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:32.595079   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:32.607929   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:33.094468   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:33.094562   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:33.111525   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:33.594502   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:33.594578   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:33.611216   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.329366   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:31.329720   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:31.329739   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:31.329672   46249 retry.go:31] will retry after 1.8606556s: waiting for machine to come up
	I0130 20:38:33.192538   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:33.192916   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:33.192938   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:33.192873   46249 retry.go:31] will retry after 2.294307307s: waiting for machine to come up
	I0130 20:38:34.094151   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:34.094223   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:34.106531   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:34.594098   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:34.594172   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:34.606286   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.094891   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:35.094995   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:35.106949   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.594452   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:35.594532   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:35.611066   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:36.094606   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:36.094684   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:36.110348   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:36.595021   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:36.595084   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:36.609884   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:37.094347   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:37.094445   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:37.106709   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:37.594248   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:37.594348   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:37.610367   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:38.095063   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:38.095141   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:38.107195   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:38.594024   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:38.594139   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:38.606041   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.489701   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:35.490129   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:35.490166   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:35.490071   46249 retry.go:31] will retry after 2.434575636s: waiting for machine to come up
	I0130 20:38:37.927709   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:37.928168   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:37.928198   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:37.928111   46249 retry.go:31] will retry after 3.073200884s: waiting for machine to come up
	I0130 20:38:39.094490   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:39.094572   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:39.106154   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:39.594866   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:39.594961   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:39.606937   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.094464   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:40.094549   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:40.106068   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.594556   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:40.594637   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:40.606499   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.606523   45037 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:38:40.606544   45037 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:38:40.606554   45037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:38:40.606605   45037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:40.646444   45037 cri.go:89] found id: ""
	I0130 20:38:40.646505   45037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:38:40.661886   45037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:38:40.670948   45037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:38:40.671008   45037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:38:40.679749   45037 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:38:40.679771   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:40.780597   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:41.804175   45037 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.023537725s)
	I0130 20:38:41.804214   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:41.999624   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:42.103064   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:42.173522   45037 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:38:42.173628   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:42.674417   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:43.173996   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:43.674137   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:41.004686   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:41.005140   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:41.005165   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:41.005085   46249 retry.go:31] will retry after 3.766414086s: waiting for machine to come up
	I0130 20:38:44.773568   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.774049   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Found IP for machine: 192.168.72.52
	I0130 20:38:44.774082   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has current primary IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.774099   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Reserving static IP address...
	I0130 20:38:44.774494   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-877742", mac: "52:54:00:c4:e0:0b", ip: "192.168.72.52"} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.774517   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Reserved static IP address: 192.168.72.52
	I0130 20:38:44.774543   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | skip adding static IP to network mk-default-k8s-diff-port-877742 - found existing host DHCP lease matching {name: "default-k8s-diff-port-877742", mac: "52:54:00:c4:e0:0b", ip: "192.168.72.52"}
	I0130 20:38:44.774561   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for SSH to be available...
	I0130 20:38:44.774589   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Getting to WaitForSSH function...
	I0130 20:38:44.776761   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.777079   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.777114   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.777210   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Using SSH client type: external
	I0130 20:38:44.777242   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa (-rw-------)
	I0130 20:38:44.777299   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:38:44.777332   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | About to run SSH command:
	I0130 20:38:44.777352   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | exit 0
	I0130 20:38:44.875219   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | SSH cmd err, output: <nil>: 
	I0130 20:38:44.875515   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetConfigRaw
	I0130 20:38:44.876243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:44.878633   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.879035   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.879069   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.879336   45441 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/config.json ...
	I0130 20:38:44.879504   45441 machine.go:88] provisioning docker machine ...
	I0130 20:38:44.879522   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:44.879734   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:44.879889   45441 buildroot.go:166] provisioning hostname "default-k8s-diff-port-877742"
	I0130 20:38:44.879932   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:44.880102   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:44.882426   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.882753   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.882777   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.882927   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:44.883099   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:44.883246   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:44.883409   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:44.883569   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:44.884066   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:44.884092   45441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-877742 && echo "default-k8s-diff-port-877742" | sudo tee /etc/hostname
	I0130 20:38:45.030801   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-877742
	
	I0130 20:38:45.030847   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.033532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.033897   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.033955   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.034094   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.034309   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.034489   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.034644   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.034826   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:45.035168   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:45.035187   45441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-877742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-877742/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-877742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:38:45.175807   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:45.175849   45441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:38:45.175884   45441 buildroot.go:174] setting up certificates
	I0130 20:38:45.175907   45441 provision.go:83] configureAuth start
	I0130 20:38:45.175923   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:45.176200   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:45.179102   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.179489   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.179526   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.179664   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.182178   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.182532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.182560   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.182666   45441 provision.go:138] copyHostCerts
	I0130 20:38:45.182716   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:38:45.182728   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:38:45.182788   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:38:45.182895   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:38:45.182910   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:38:45.182973   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:38:45.183054   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:38:45.183065   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:38:45.183090   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:38:45.183158   45441 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-877742 san=[192.168.72.52 192.168.72.52 localhost 127.0.0.1 minikube default-k8s-diff-port-877742]
	I0130 20:38:45.352895   45441 provision.go:172] copyRemoteCerts
	I0130 20:38:45.352960   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:38:45.352986   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.355820   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.356141   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.356169   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.356343   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.356540   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.356717   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.356868   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.136084   45819 start.go:369] acquired machines lock for "old-k8s-version-150971" in 2m36.388823473s
	I0130 20:38:46.136157   45819 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:46.136169   45819 fix.go:54] fixHost starting: 
	I0130 20:38:46.136624   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:46.136669   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:46.153210   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33685
	I0130 20:38:46.153604   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:46.154080   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:38:46.154104   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:46.154422   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:46.154630   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:38:46.154771   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:38:46.156388   45819 fix.go:102] recreateIfNeeded on old-k8s-version-150971: state=Stopped err=<nil>
	I0130 20:38:46.156420   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	W0130 20:38:46.156613   45819 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:46.158388   45819 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-150971" ...
	I0130 20:38:45.456511   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:38:45.483324   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0130 20:38:45.510567   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:38:45.535387   45441 provision.go:86] duration metric: configureAuth took 359.467243ms
	I0130 20:38:45.535421   45441 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:38:45.535659   45441 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:45.535749   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.538712   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.539176   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.539214   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.539334   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.539574   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.539741   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.539995   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.540244   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:45.540770   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:45.540796   45441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:38:45.877778   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:38:45.877813   45441 machine.go:91] provisioned docker machine in 998.294632ms
	I0130 20:38:45.877825   45441 start.go:300] post-start starting for "default-k8s-diff-port-877742" (driver="kvm2")
	I0130 20:38:45.877845   45441 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:38:45.877869   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:45.878190   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:38:45.878224   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.881167   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.881533   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.881566   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.881704   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.881880   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.882064   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.882207   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:45.972932   45441 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:38:45.977412   45441 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:38:45.977437   45441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:38:45.977514   45441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:38:45.977593   45441 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:38:45.977694   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:38:45.985843   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:46.008484   45441 start.go:303] post-start completed in 130.643321ms
	I0130 20:38:46.008509   45441 fix.go:56] fixHost completed within 22.404447995s
	I0130 20:38:46.008533   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.011463   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.011901   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.011944   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.012088   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.012304   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.012500   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.012647   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.012803   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:46.013202   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:46.013226   45441 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:38:46.135930   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647126.077813825
	
	I0130 20:38:46.135955   45441 fix.go:206] guest clock: 1706647126.077813825
	I0130 20:38:46.135965   45441 fix.go:219] Guest: 2024-01-30 20:38:46.077813825 +0000 UTC Remote: 2024-01-30 20:38:46.008513384 +0000 UTC m=+250.621109629 (delta=69.300441ms)
	I0130 20:38:46.135988   45441 fix.go:190] guest clock delta is within tolerance: 69.300441ms
	I0130 20:38:46.135993   45441 start.go:83] releasing machines lock for "default-k8s-diff-port-877742", held for 22.53196506s
	I0130 20:38:46.136021   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.136315   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:46.139211   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.139549   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.139581   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.139695   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140427   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140507   45441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:38:46.140555   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.140639   45441 ssh_runner.go:195] Run: cat /version.json
	I0130 20:38:46.140661   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.143348   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143614   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143651   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.143675   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143843   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.144027   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.144081   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.144110   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.144228   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.144253   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.144434   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.144434   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.144580   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.144707   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.241499   45441 ssh_runner.go:195] Run: systemctl --version
	I0130 20:38:46.264180   45441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:38:46.417654   45441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:38:46.423377   45441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:38:46.423450   45441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:38:46.439524   45441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:38:46.439549   45441 start.go:475] detecting cgroup driver to use...
	I0130 20:38:46.439612   45441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:38:46.456668   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:38:46.469494   45441 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:38:46.469547   45441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:38:46.482422   45441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:38:46.496031   45441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:38:46.601598   45441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:38:46.710564   45441 docker.go:233] disabling docker service ...
	I0130 20:38:46.710633   45441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:38:46.724084   45441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:38:46.736019   45441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:38:46.853310   45441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:38:46.976197   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:38:46.991033   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:38:47.009961   45441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:38:47.010028   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.019749   45441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:38:47.019822   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.032215   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.043642   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.056005   45441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:38:47.068954   45441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:38:47.079752   45441 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:38:47.079823   45441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:38:47.096106   45441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:38:47.109074   45441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:38:47.243783   45441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:38:47.468971   45441 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:38:47.469055   45441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:38:47.474571   45441 start.go:543] Will wait 60s for crictl version
	I0130 20:38:47.474646   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:38:47.479007   45441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:38:47.525155   45441 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:38:47.525259   45441 ssh_runner.go:195] Run: crio --version
	I0130 20:38:47.582308   45441 ssh_runner.go:195] Run: crio --version
	I0130 20:38:47.648689   45441 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 20:38:44.173930   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:44.197493   45037 api_server.go:72] duration metric: took 2.023971316s to wait for apiserver process to appear ...
	I0130 20:38:44.197522   45037 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:38:44.197545   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:44.198089   45037 api_server.go:269] stopped: https://192.168.61.63:8443/healthz: Get "https://192.168.61.63:8443/healthz": dial tcp 192.168.61.63:8443: connect: connection refused
	I0130 20:38:44.697622   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:48.683401   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.683435   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:48.683452   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:46.159722   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Start
	I0130 20:38:46.159892   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring networks are active...
	I0130 20:38:46.160650   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring network default is active
	I0130 20:38:46.160960   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring network mk-old-k8s-version-150971 is active
	I0130 20:38:46.161374   45819 main.go:141] libmachine: (old-k8s-version-150971) Getting domain xml...
	I0130 20:38:46.162142   45819 main.go:141] libmachine: (old-k8s-version-150971) Creating domain...
	I0130 20:38:47.490526   45819 main.go:141] libmachine: (old-k8s-version-150971) Waiting to get IP...
	I0130 20:38:47.491491   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:47.491971   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:47.492059   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:47.491949   46425 retry.go:31] will retry after 201.906522ms: waiting for machine to come up
	I0130 20:38:47.695709   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:47.696195   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:47.696226   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:47.696146   46425 retry.go:31] will retry after 347.547284ms: waiting for machine to come up
	I0130 20:38:48.045541   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.046078   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.046102   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.046013   46425 retry.go:31] will retry after 373.23424ms: waiting for machine to come up
	I0130 20:38:48.420618   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.421238   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.421263   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.421188   46425 retry.go:31] will retry after 515.166265ms: waiting for machine to come up
	I0130 20:38:48.937713   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.942554   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.942581   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.942448   46425 retry.go:31] will retry after 626.563548ms: waiting for machine to come up
	I0130 20:38:49.570078   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:49.570658   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:49.570689   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:49.570550   46425 retry.go:31] will retry after 618.022034ms: waiting for machine to come up
	I0130 20:38:48.786797   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.786825   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:48.786848   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:48.837579   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.837608   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:49.198568   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:49.206091   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:38:49.206135   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:38:49.697669   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:49.707878   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:38:49.707912   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:38:50.198039   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:50.209003   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 200:
	ok
	I0130 20:38:50.228887   45037 api_server.go:141] control plane version: v1.28.4
	I0130 20:38:50.228967   45037 api_server.go:131] duration metric: took 6.031436808s to wait for apiserver health ...
	I0130 20:38:50.228981   45037 cni.go:84] Creating CNI manager for ""
	I0130 20:38:50.228991   45037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:50.230543   45037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:38:47.649943   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:47.653185   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:47.653623   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:47.653664   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:47.653933   45441 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0130 20:38:47.659385   45441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:47.675851   45441 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:38:47.675918   45441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:47.724799   45441 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:38:47.724883   45441 ssh_runner.go:195] Run: which lz4
	I0130 20:38:47.729563   45441 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:38:47.735015   45441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:38:47.735048   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:38:49.612191   45441 crio.go:444] Took 1.882668 seconds to copy over tarball
	I0130 20:38:49.612263   45441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:38:50.231895   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:38:50.262363   45037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:38:50.290525   45037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:38:50.307654   45037 system_pods.go:59] 8 kube-system pods found
	I0130 20:38:50.307696   45037 system_pods.go:61] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:38:50.307708   45037 system_pods.go:61] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:38:50.307721   45037 system_pods.go:61] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:38:50.307736   45037 system_pods.go:61] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:38:50.307751   45037 system_pods.go:61] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:38:50.307760   45037 system_pods.go:61] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:38:50.307769   45037 system_pods.go:61] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:38:50.307788   45037 system_pods.go:61] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:38:50.307810   45037 system_pods.go:74] duration metric: took 17.261001ms to wait for pod list to return data ...
	I0130 20:38:50.307820   45037 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:38:50.317889   45037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:38:50.317926   45037 node_conditions.go:123] node cpu capacity is 2
	I0130 20:38:50.317939   45037 node_conditions.go:105] duration metric: took 10.11037ms to run NodePressure ...
	I0130 20:38:50.317960   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:50.681835   45037 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:38:50.688460   45037 kubeadm.go:787] kubelet initialised
	I0130 20:38:50.688488   45037 kubeadm.go:788] duration metric: took 6.61921ms waiting for restarted kubelet to initialise ...
	I0130 20:38:50.688498   45037 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:50.696051   45037 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.703680   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.703713   45037 pod_ready.go:81] duration metric: took 7.634057ms waiting for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.703724   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.703739   45037 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.710192   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "etcd-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.710216   45037 pod_ready.go:81] duration metric: took 6.467699ms waiting for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.710227   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "etcd-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.710235   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.720866   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.720894   45037 pod_ready.go:81] duration metric: took 10.648867ms waiting for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.720906   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.720914   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.731095   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.731162   45037 pod_ready.go:81] duration metric: took 10.237453ms waiting for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.731181   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.731190   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.097357   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-proxy-g7q5t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.097391   45037 pod_ready.go:81] duration metric: took 366.190232ms waiting for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.097404   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-proxy-g7q5t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.097413   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.499223   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.499261   45037 pod_ready.go:81] duration metric: took 401.839475ms waiting for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.499293   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.499303   45037 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.895725   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.895779   45037 pod_ready.go:81] duration metric: took 396.460908ms waiting for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.895798   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.895811   45037 pod_ready.go:38] duration metric: took 1.207302604s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:51.895836   45037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:38:51.909431   45037 ops.go:34] apiserver oom_adj: -16
	I0130 20:38:51.909454   45037 kubeadm.go:640] restartCluster took 21.337960534s
	I0130 20:38:51.909472   45037 kubeadm.go:406] StartCluster complete in 21.386877314s
	I0130 20:38:51.909491   45037 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:51.909571   45037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:38:51.911558   45037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:51.911793   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:38:51.911888   45037 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:38:51.911974   45037 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-208583"
	I0130 20:38:51.911995   45037 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-208583"
	W0130 20:38:51.912007   45037 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:38:51.912044   45037 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:51.912101   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.912138   45037 addons.go:69] Setting default-storageclass=true in profile "embed-certs-208583"
	I0130 20:38:51.912168   45037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-208583"
	I0130 20:38:51.912131   45037 addons.go:69] Setting metrics-server=true in profile "embed-certs-208583"
	I0130 20:38:51.912238   45037 addons.go:234] Setting addon metrics-server=true in "embed-certs-208583"
	W0130 20:38:51.912250   45037 addons.go:243] addon metrics-server should already be in state true
	I0130 20:38:51.912328   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.912537   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912561   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.912583   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912603   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.912686   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912711   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.923647   45037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-208583" context rescaled to 1 replicas
	I0130 20:38:51.923691   45037 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:38:51.926120   45037 out.go:177] * Verifying Kubernetes components...
	I0130 20:38:51.929413   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:38:51.930498   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0130 20:38:51.930911   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0130 20:38:51.931075   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.931580   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.931988   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.932001   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.932296   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.932730   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.932756   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.933221   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.933273   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.933917   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.934492   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.934524   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.936079   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42667
	I0130 20:38:51.936488   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.937121   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.937144   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.937525   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.937703   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.941576   45037 addons.go:234] Setting addon default-storageclass=true in "embed-certs-208583"
	W0130 20:38:51.941597   45037 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:38:51.941623   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.942033   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.942072   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.953268   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44577
	I0130 20:38:51.953715   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0130 20:38:51.953863   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.954633   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.954659   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.954742   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.955212   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.955233   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.955318   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.955530   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.955663   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.955853   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.957839   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.958080   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.960896   45037 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:38:51.961493   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
	I0130 20:38:51.962677   45037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:38:51.962838   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:38:51.964463   45037 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:38:51.964487   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:38:51.964518   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.964486   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:38:51.964554   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.963107   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.965261   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.965274   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.965656   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.966482   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.966520   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.968651   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969034   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.969062   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969307   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.969493   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.969580   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969656   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.969809   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:51.970328   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.970372   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.970391   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.970521   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.970706   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.970866   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:51.985009   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0130 20:38:51.985512   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.986096   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.986119   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.986558   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.986778   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.988698   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.991566   45037 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:38:51.991620   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:38:51.991647   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.994416   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.995367   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.995370   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.995439   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.995585   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.995740   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.995885   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:52.125074   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:38:52.140756   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:38:52.140787   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:38:52.180728   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:38:52.195559   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:38:52.195587   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:38:52.235770   45037 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 20:38:52.235779   45037 node_ready.go:35] waiting up to 6m0s for node "embed-certs-208583" to be "Ready" ...
	I0130 20:38:52.243414   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:38:52.243444   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:38:52.349604   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:38:54.111857   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.931041237s)
	I0130 20:38:54.111916   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.111938   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112013   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.986903299s)
	I0130 20:38:54.112051   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112065   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112337   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112383   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112398   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112403   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112411   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112421   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112426   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112434   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112423   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112450   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112653   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112728   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112748   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112770   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112797   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112806   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.119872   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.119893   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.120118   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.120138   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121373   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.771724991s)
	I0130 20:38:54.121408   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.121421   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.121619   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.121636   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121647   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.121655   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.121837   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.121853   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121875   45037 addons.go:470] Verifying addon metrics-server=true in "embed-certs-208583"
	I0130 20:38:54.332655   45037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:38:50.189837   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:50.190326   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:50.190352   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:50.190273   46425 retry.go:31] will retry after 843.505616ms: waiting for machine to come up
	I0130 20:38:51.035080   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:51.035482   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:51.035511   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:51.035454   46425 retry.go:31] will retry after 1.230675294s: waiting for machine to come up
	I0130 20:38:52.267754   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:52.268342   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:52.268365   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:52.268298   46425 retry.go:31] will retry after 1.516187998s: waiting for machine to come up
	I0130 20:38:53.785734   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:53.786142   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:53.786173   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:53.786084   46425 retry.go:31] will retry after 2.020274977s: waiting for machine to come up
	I0130 20:38:53.002777   45441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.390479779s)
	I0130 20:38:53.002812   45441 crio.go:451] Took 3.390595 seconds to extract the tarball
	I0130 20:38:53.002824   45441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:38:53.059131   45441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:53.121737   45441 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:38:53.121765   45441 cache_images.go:84] Images are preloaded, skipping loading
	I0130 20:38:53.121837   45441 ssh_runner.go:195] Run: crio config
	I0130 20:38:53.187904   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:38:53.187931   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:53.187953   45441 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:38:53.187982   45441 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-877742 NodeName:default-k8s-diff-port-877742 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:38:53.188157   45441 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-877742"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:38:53.188253   45441 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-877742 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-877742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0130 20:38:53.188320   45441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:38:53.200851   45441 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:38:53.200938   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:38:53.212897   45441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0130 20:38:53.231805   45441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:38:53.253428   45441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0130 20:38:53.274041   45441 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0130 20:38:53.278499   45441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:53.295089   45441 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742 for IP: 192.168.72.52
	I0130 20:38:53.295126   45441 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:53.295326   45441 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:38:53.295393   45441 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:38:53.295497   45441 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.key
	I0130 20:38:53.295581   45441 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.key.02e1fdc8
	I0130 20:38:53.295637   45441 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.key
	I0130 20:38:53.295774   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:38:53.295813   45441 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:38:53.295827   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:38:53.295864   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:38:53.295912   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:38:53.295948   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:38:53.296012   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:53.296828   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:38:53.326150   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:38:53.356286   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:38:53.384496   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:38:53.414403   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:38:53.440628   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:38:53.465452   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:38:53.494321   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:38:53.520528   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:38:53.543933   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:38:53.569293   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:38:53.594995   45441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:38:53.615006   45441 ssh_runner.go:195] Run: openssl version
	I0130 20:38:53.622442   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:38:53.636482   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.642501   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.642572   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.649251   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:38:53.661157   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:38:53.673453   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.678369   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.678439   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.684812   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:38:53.696906   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:38:53.710065   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.714715   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.714776   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.720458   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:38:53.733050   45441 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:38:53.737894   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:38:53.744337   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:38:53.750326   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:38:53.756139   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:38:53.761883   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:38:53.767633   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:38:53.773367   45441 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-877742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-877742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.52 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:38:53.773480   45441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:38:53.773551   45441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:53.815095   45441 cri.go:89] found id: ""
	I0130 20:38:53.815159   45441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:38:53.826497   45441 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:38:53.826521   45441 kubeadm.go:636] restartCluster start
	I0130 20:38:53.826570   45441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:38:53.837155   45441 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:53.838622   45441 kubeconfig.go:92] found "default-k8s-diff-port-877742" server: "https://192.168.72.52:8444"
	I0130 20:38:53.841776   45441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:38:53.852124   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:53.852191   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:53.864432   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.353064   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:54.353141   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:54.365422   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.853083   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:54.853170   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:54.869932   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:55.352281   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:55.352360   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:55.369187   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.550999   45037 addons.go:505] enable addons completed in 2.639107358s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:38:54.692017   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:56.740251   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:55.809310   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:55.809708   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:55.809741   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:55.809655   46425 retry.go:31] will retry after 1.997080797s: waiting for machine to come up
	I0130 20:38:57.808397   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:57.808798   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:57.808829   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:57.808744   46425 retry.go:31] will retry after 3.605884761s: waiting for machine to come up
	I0130 20:38:55.852241   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:55.852356   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:55.864923   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:56.352455   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:56.352559   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:56.368458   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:56.853090   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:56.853175   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:56.869148   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:57.352965   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:57.353055   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:57.370697   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:57.852261   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:57.852391   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:57.868729   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:58.352147   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:58.352250   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:58.368543   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:58.852300   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:58.852373   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:58.868594   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.353039   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:59.353110   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:59.365593   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.852147   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:59.852276   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:59.865561   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:00.353077   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:00.353186   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:00.370006   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.242842   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:59.739830   45037 node_ready.go:49] node "embed-certs-208583" has status "Ready":"True"
	I0130 20:38:59.739851   45037 node_ready.go:38] duration metric: took 7.503983369s waiting for node "embed-certs-208583" to be "Ready" ...
	I0130 20:38:59.739859   45037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:59.746243   45037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.751722   45037 pod_ready.go:92] pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.751745   45037 pod_ready.go:81] duration metric: took 5.480115ms waiting for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.751752   45037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.757152   45037 pod_ready.go:92] pod "etcd-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.757175   45037 pod_ready.go:81] duration metric: took 5.417291ms waiting for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.757184   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.762156   45037 pod_ready.go:92] pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.762231   45037 pod_ready.go:81] duration metric: took 4.985076ms waiting for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.762267   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:01.773853   45037 pod_ready.go:102] pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:01.415831   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:01.416304   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:39:01.416345   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:39:01.416273   46425 retry.go:31] will retry after 3.558433109s: waiting for machine to come up
	I0130 20:39:00.852444   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:00.852545   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:00.865338   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:01.353002   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:01.353101   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:01.366419   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:01.853034   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:01.853114   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:01.866142   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:02.352652   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:02.352752   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:02.364832   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:02.852325   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:02.852406   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:02.864013   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.352408   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:03.352518   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:03.363939   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.853126   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:03.853200   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:03.865047   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.865084   45441 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:39:03.865094   45441 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:39:03.865105   45441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:39:03.865154   45441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:03.904863   45441 cri.go:89] found id: ""
	I0130 20:39:03.904932   45441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:39:03.922225   45441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:39:03.931861   45441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:39:03.931915   45441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:03.941185   45441 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:03.941205   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.064230   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.627940   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.816900   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.893059   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.986288   45441 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:39:04.986362   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.448368   44923 start.go:369] acquired machines lock for "no-preload-473743" in 58.134425603s
	I0130 20:39:06.448435   44923 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:39:06.448443   44923 fix.go:54] fixHost starting: 
	I0130 20:39:06.448866   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:39:06.448900   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:39:06.468570   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I0130 20:39:06.468965   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:39:06.469552   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:39:06.469587   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:39:06.469950   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:39:06.470153   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:06.470312   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:39:06.472312   44923 fix.go:102] recreateIfNeeded on no-preload-473743: state=Stopped err=<nil>
	I0130 20:39:06.472337   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	W0130 20:39:06.472495   44923 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:39:06.474460   44923 out.go:177] * Restarting existing kvm2 VM for "no-preload-473743" ...
	I0130 20:39:04.976314   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.976787   45819 main.go:141] libmachine: (old-k8s-version-150971) Found IP for machine: 192.168.39.16
	I0130 20:39:04.976820   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.976830   45819 main.go:141] libmachine: (old-k8s-version-150971) Reserving static IP address...
	I0130 20:39:04.977271   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "old-k8s-version-150971", mac: "52:54:00:6e:fe:f8", ip: "192.168.39.16"} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:04.977300   45819 main.go:141] libmachine: (old-k8s-version-150971) Reserved static IP address: 192.168.39.16
	I0130 20:39:04.977325   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | skip adding static IP to network mk-old-k8s-version-150971 - found existing host DHCP lease matching {name: "old-k8s-version-150971", mac: "52:54:00:6e:fe:f8", ip: "192.168.39.16"}
	I0130 20:39:04.977345   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Getting to WaitForSSH function...
	I0130 20:39:04.977361   45819 main.go:141] libmachine: (old-k8s-version-150971) Waiting for SSH to be available...
	I0130 20:39:04.979621   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.980015   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:04.980042   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.980138   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Using SSH client type: external
	I0130 20:39:04.980164   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa (-rw-------)
	I0130 20:39:04.980206   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:39:04.980221   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | About to run SSH command:
	I0130 20:39:04.980259   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | exit 0
	I0130 20:39:05.079758   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | SSH cmd err, output: <nil>: 
	I0130 20:39:05.080092   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetConfigRaw
	I0130 20:39:05.080846   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:05.083636   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.084019   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.084062   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.084354   45819 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/config.json ...
	I0130 20:39:05.084608   45819 machine.go:88] provisioning docker machine ...
	I0130 20:39:05.084635   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:05.084845   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.085031   45819 buildroot.go:166] provisioning hostname "old-k8s-version-150971"
	I0130 20:39:05.085063   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.085221   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.087561   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.087930   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.087963   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.088067   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.088220   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.088384   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.088550   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.088736   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.089124   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.089141   45819 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-150971 && echo "old-k8s-version-150971" | sudo tee /etc/hostname
	I0130 20:39:05.232496   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-150971
	
	I0130 20:39:05.232528   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.234898   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.235190   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.235227   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.235310   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.235515   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.235655   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.235791   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.235921   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.236233   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.236251   45819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-150971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-150971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-150971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:39:05.370716   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:39:05.370753   45819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:39:05.370774   45819 buildroot.go:174] setting up certificates
	I0130 20:39:05.370787   45819 provision.go:83] configureAuth start
	I0130 20:39:05.370800   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.371158   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:05.373602   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.373946   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.373970   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.374153   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.376230   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.376617   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.376657   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.376763   45819 provision.go:138] copyHostCerts
	I0130 20:39:05.376816   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:39:05.376826   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:39:05.376892   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:39:05.377066   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:39:05.377079   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:39:05.377113   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:39:05.377205   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:39:05.377216   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:39:05.377243   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:39:05.377336   45819 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-150971 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube old-k8s-version-150971]
	I0130 20:39:05.649128   45819 provision.go:172] copyRemoteCerts
	I0130 20:39:05.649183   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:39:05.649206   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.652019   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.652353   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.652385   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.652657   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.652857   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.653048   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.653207   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:05.753981   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0130 20:39:05.782847   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 20:39:05.810083   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:39:05.836967   45819 provision.go:86] duration metric: configureAuth took 466.16712ms
	I0130 20:39:05.836989   45819 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:39:05.837156   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:39:05.837222   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.840038   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.840384   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.840422   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.840597   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.840832   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.841019   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.841167   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.841338   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.841681   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.841700   45819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:39:06.170121   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:39:06.170151   45819 machine.go:91] provisioned docker machine in 1.08552444s
	I0130 20:39:06.170163   45819 start.go:300] post-start starting for "old-k8s-version-150971" (driver="kvm2")
	I0130 20:39:06.170183   45819 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:39:06.170202   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.170544   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:39:06.170583   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.173794   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.174165   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.174192   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.174421   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.174620   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.174804   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.174947   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.273272   45819 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:39:06.277900   45819 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:39:06.277928   45819 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:39:06.278010   45819 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:39:06.278099   45819 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:39:06.278207   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:39:06.286905   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:06.311772   45819 start.go:303] post-start completed in 141.592454ms
	I0130 20:39:06.311808   45819 fix.go:56] fixHost completed within 20.175639407s
	I0130 20:39:06.311832   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.314627   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.314998   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.315027   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.315179   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.315402   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.315585   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.315758   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.315936   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:06.316254   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:06.316269   45819 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:39:06.448193   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647146.389757507
	
	I0130 20:39:06.448219   45819 fix.go:206] guest clock: 1706647146.389757507
	I0130 20:39:06.448230   45819 fix.go:219] Guest: 2024-01-30 20:39:06.389757507 +0000 UTC Remote: 2024-01-30 20:39:06.311812895 +0000 UTC m=+176.717060563 (delta=77.944612ms)
	I0130 20:39:06.448277   45819 fix.go:190] guest clock delta is within tolerance: 77.944612ms
	I0130 20:39:06.448285   45819 start.go:83] releasing machines lock for "old-k8s-version-150971", held for 20.312150878s
	I0130 20:39:06.448318   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.448584   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:06.451978   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.452448   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.452475   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.452632   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453188   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453364   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453450   45819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:39:06.453501   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.453604   45819 ssh_runner.go:195] Run: cat /version.json
	I0130 20:39:06.453622   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.456426   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.456694   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.456722   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.456743   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.457015   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.457218   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.457228   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.457266   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.457473   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.457483   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.457648   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.457657   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.457834   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.457945   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.575025   45819 ssh_runner.go:195] Run: systemctl --version
	I0130 20:39:06.580884   45819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:39:06.730119   45819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:39:06.737872   45819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:39:06.737945   45819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:39:06.752952   45819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:39:06.752987   45819 start.go:475] detecting cgroup driver to use...
	I0130 20:39:06.753062   45819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:39:06.772925   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:39:06.787880   45819 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:39:06.787957   45819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:39:06.805662   45819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:39:06.820819   45819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:39:06.941809   45819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:39:07.067216   45819 docker.go:233] disabling docker service ...
	I0130 20:39:07.067299   45819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:39:07.084390   45819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:39:07.099373   45819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:39:07.242239   45819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:39:07.378297   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:39:07.390947   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:39:07.414177   45819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0130 20:39:07.414256   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.427074   45819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:39:07.427154   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.439058   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.451547   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.462473   45819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:39:07.474082   45819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:39:07.484883   45819 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:39:07.484943   45819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:39:07.502181   45819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:39:07.511315   45819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:39:07.677114   45819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:39:07.878176   45819 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:39:07.878247   45819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:39:07.885855   45819 start.go:543] Will wait 60s for crictl version
	I0130 20:39:07.885918   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:07.895480   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:39:07.946256   45819 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:39:07.946344   45819 ssh_runner.go:195] Run: crio --version
	I0130 20:39:07.999647   45819 ssh_runner.go:195] Run: crio --version
	I0130 20:39:08.064335   45819 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0130 20:39:04.270868   45037 pod_ready.go:92] pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.270900   45037 pod_ready.go:81] duration metric: took 4.508624463s waiting for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.270911   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.276806   45037 pod_ready.go:92] pod "kube-proxy-g7q5t" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.276830   45037 pod_ready.go:81] duration metric: took 5.914142ms waiting for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.276839   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.283207   45037 pod_ready.go:92] pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.283225   45037 pod_ready.go:81] duration metric: took 6.380407ms waiting for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.283235   45037 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:06.291591   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:08.318273   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:08.065754   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:08.068986   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:08.069433   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:08.069477   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:08.069665   45819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 20:39:08.074101   45819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:08.088404   45819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 20:39:08.088468   45819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:08.133749   45819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 20:39:08.133831   45819 ssh_runner.go:195] Run: which lz4
	I0130 20:39:08.138114   45819 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:39:08.142668   45819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:39:08.142709   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0130 20:39:05.487066   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:05.987386   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.486465   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.987491   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:07.486540   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:07.518826   45441 api_server.go:72] duration metric: took 2.532536561s to wait for apiserver process to appear ...
	I0130 20:39:07.518852   45441 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:39:07.518875   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:06.475902   44923 main.go:141] libmachine: (no-preload-473743) Calling .Start
	I0130 20:39:06.476095   44923 main.go:141] libmachine: (no-preload-473743) Ensuring networks are active...
	I0130 20:39:06.476929   44923 main.go:141] libmachine: (no-preload-473743) Ensuring network default is active
	I0130 20:39:06.477344   44923 main.go:141] libmachine: (no-preload-473743) Ensuring network mk-no-preload-473743 is active
	I0130 20:39:06.477817   44923 main.go:141] libmachine: (no-preload-473743) Getting domain xml...
	I0130 20:39:06.478643   44923 main.go:141] libmachine: (no-preload-473743) Creating domain...
	I0130 20:39:07.834909   44923 main.go:141] libmachine: (no-preload-473743) Waiting to get IP...
	I0130 20:39:07.835906   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:07.836320   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:07.836371   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:07.836287   46613 retry.go:31] will retry after 205.559104ms: waiting for machine to come up
	I0130 20:39:08.043926   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.044522   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.044607   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.044570   46613 retry.go:31] will retry after 291.055623ms: waiting for machine to come up
	I0130 20:39:08.337157   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.337756   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.337859   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.337823   46613 retry.go:31] will retry after 462.903788ms: waiting for machine to come up
	I0130 20:39:08.802588   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.803397   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.803497   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.803459   46613 retry.go:31] will retry after 497.808285ms: waiting for machine to come up
	I0130 20:39:09.303349   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:09.304015   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:09.304037   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:09.303936   46613 retry.go:31] will retry after 569.824748ms: waiting for machine to come up
	I0130 20:39:09.875816   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:09.876316   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:09.876348   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:09.876259   46613 retry.go:31] will retry after 589.654517ms: waiting for machine to come up
	I0130 20:39:10.467029   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:10.467568   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:10.467601   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:10.467520   46613 retry.go:31] will retry after 857.069247ms: waiting for machine to come up
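The retry.go entries above show libmachine polling the hypervisor for the freshly started domain's IP address, waiting a little longer between each attempt. A minimal sketch of that wait-for-IP pattern, assuming a hypothetical lookupIP helper and illustrative delay values (this is not minikube's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the hypervisor
// (e.g. parsing DHCP leases) for the domain's current IP address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("no lease yet") // placeholder
}

// waitForIP retries lookupIP with a growing delay until the deadline,
// mirroring the "will retry after ..." pattern in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // back off roughly 1.5x per attempt
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if _, err := waitForIP("no-preload-473743", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
```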
	I0130 20:39:10.796542   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:13.290072   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:09.980254   45819 crio.go:444] Took 1.842164 seconds to copy over tarball
	I0130 20:39:09.980328   45819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:39:13.116148   45819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.135783447s)
	I0130 20:39:13.116184   45819 crio.go:451] Took 3.135904 seconds to extract the tarball
	I0130 20:39:13.116196   45819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:39:13.161285   45819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:13.226970   45819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 20:39:13.227008   45819 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 20:39:13.227096   45819 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.227151   45819 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.227153   45819 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.227173   45819 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.227121   45819 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:13.227155   45819 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0130 20:39:13.227439   45819 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.227117   45819 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.229003   45819 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.229038   45819 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:13.229065   45819 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.229112   45819 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.229011   45819 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0130 20:39:13.229170   45819 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.229177   45819 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.229217   45819 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.443441   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.484878   45819 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0130 20:39:13.484941   45819 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.485021   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.489291   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.526847   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.526966   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.527312   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0130 20:39:13.528949   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.532002   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0130 20:39:13.532309   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.532701   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.662312   45819 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0130 20:39:13.662355   45819 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.662422   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.669155   45819 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0130 20:39:13.669201   45819 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.669265   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708339   45819 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0130 20:39:13.708373   45819 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0130 20:39:13.708398   45819 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0130 20:39:13.708404   45819 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.708435   45819 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0130 20:39:13.708470   45819 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.708476   45819 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0130 20:39:13.708491   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.708507   45819 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.708508   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708451   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708443   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708565   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.708549   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.767721   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.767762   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.767789   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0130 20:39:13.767835   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0130 20:39:13.767869   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0130 20:39:13.767935   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.816661   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0130 20:39:13.863740   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0130 20:39:13.863751   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0130 20:39:13.863798   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0130 20:39:14.096216   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:14.241457   45819 cache_images.go:92] LoadImages completed in 1.014424533s
	W0130 20:39:14.241562   45819 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
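The LoadImages phase above checks each required image with `podman image inspect --format {{.Id}}` and only transfers it from the local cache when the runtime does not already have it. A hedged local sketch of that presence check (the sudo prefix and running podman over SSH are omitted for brevity; this is an illustration, not minikube's code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether the container runtime already has an image,
// roughly what the "podman image inspect --format {{.Id}}" runs above do.
func imagePresent(image string) bool {
	out, err := exec.Command("podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		// podman exits non-zero when the image is not in local storage
		return false
	}
	return strings.TrimSpace(string(out)) != ""
}

func main() {
	fmt.Println("image present:", imagePresent("registry.k8s.io/coredns:1.6.2"))
}
```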
	I0130 20:39:14.241641   45819 ssh_runner.go:195] Run: crio config
	I0130 20:39:14.307624   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:39:14.307644   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:14.307673   45819 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:39:14.307696   45819 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-150971 NodeName:old-k8s-version-150971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0130 20:39:14.307866   45819 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-150971"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-150971
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.16:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:39:14.307973   45819 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-150971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:39:14.308042   45819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0130 20:39:14.318757   45819 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:39:14.318830   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:39:14.329640   45819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0130 20:39:14.347498   45819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:39:14.365403   45819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0130 20:39:14.383846   45819 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I0130 20:39:14.388138   45819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
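The bash pipeline above drops any existing control-plane.minikube.internal entry from /etc/hosts and appends the current IP, so the control-plane name always resolves locally. A minimal Go equivalent of that rewrite, assuming direct write access to the file (the log's temp-file-plus-sudo-cp step is simplified away):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line maps
// hostname to ip, mirroring the grep/echo pipeline run over SSH above.
func pinHost(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any existing mapping for the hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("hosts.txt", "192.168.39.16", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```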
	I0130 20:39:14.402420   45819 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971 for IP: 192.168.39.16
	I0130 20:39:14.402483   45819 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:39:14.402661   45819 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:39:14.402707   45819 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:39:14.402780   45819 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.key
	I0130 20:39:14.402837   45819 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.key.5918fcb3
	I0130 20:39:14.402877   45819 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.key
	I0130 20:39:14.403025   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:39:14.403076   45819 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:39:14.403094   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:39:14.403131   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:39:14.403171   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:39:14.403206   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:39:14.403290   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:14.404157   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:39:14.430902   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 20:39:14.454554   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:39:14.482335   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 20:39:14.505963   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:39:14.532616   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:39:14.558930   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:39:14.585784   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:39:14.609214   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:39:14.635743   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:39:12.268901   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:12.268934   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:12.268948   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:12.307051   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:12.307093   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:12.519619   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:12.530857   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:12.530904   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:13.019370   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:13.024544   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:13.024577   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:13.519023   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:13.525748   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:13.525784   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:14.019318   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:14.026570   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:14.026600   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:14.519000   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.074306   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.074341   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:15.074353   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.081035   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.081075   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:11.325993   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:11.326475   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:11.326506   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:11.326439   46613 retry.go:31] will retry after 994.416536ms: waiting for machine to come up
	I0130 20:39:12.323190   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:12.323897   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:12.323924   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:12.323807   46613 retry.go:31] will retry after 1.746704262s: waiting for machine to come up
	I0130 20:39:14.072583   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:14.073100   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:14.073158   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:14.073072   46613 retry.go:31] will retry after 2.322781715s: waiting for machine to come up
	I0130 20:39:15.519005   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.609496   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.609529   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:16.018990   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:16.024390   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 200:
	ok
	I0130 20:39:16.037151   45441 api_server.go:141] control plane version: v1.28.4
	I0130 20:39:16.037191   45441 api_server.go:131] duration metric: took 8.518327222s to wait for apiserver health ...
	I0130 20:39:16.037203   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:39:16.037211   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:16.039114   45441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
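The healthz wait above shows the expected startup sequence: /healthz first returns 403 for the anonymous user, then 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and finally 200, at which point minikube records the control-plane version. A minimal sketch of that readiness poll, assuming a direct HTTPS GET with certificate verification skipped purely for illustration (not minikube's actual client setup):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 OK or the timeout expires, similar to the loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.52:8444/healthz", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
```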
	I0130 20:39:15.290788   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:17.292552   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:14.662372   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:39:14.814291   45819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:39:14.832453   45819 ssh_runner.go:195] Run: openssl version
	I0130 20:39:14.838238   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:39:14.848628   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.853713   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.853761   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.859768   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:39:14.870658   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:39:14.881444   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.886241   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.886290   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.892197   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:39:14.903459   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:39:14.914463   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.919337   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.919413   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.925258   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:39:14.935893   45819 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:39:14.941741   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:39:14.948871   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:39:14.955038   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:39:14.961605   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:39:14.967425   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:39:14.973845   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
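The `openssl x509 -noout -in ... -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster restart proceeds. A hedged Go equivalent of that single check using crypto/x509 (the path is taken from the log; the helper name is illustrative):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same condition `openssl x509 -checkend 86400` tests in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```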
	I0130 20:39:14.980072   45819 kubeadm.go:404] StartCluster: {Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:39:14.980218   45819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:39:14.980265   45819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:15.021821   45819 cri.go:89] found id: ""
	I0130 20:39:15.021920   45819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:39:15.033604   45819 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:39:15.033629   45819 kubeadm.go:636] restartCluster start
	I0130 20:39:15.033686   45819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:39:15.044324   45819 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:15.045356   45819 kubeconfig.go:92] found "old-k8s-version-150971" server: "https://192.168.39.16:8443"
	I0130 20:39:15.047610   45819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:39:15.057690   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:15.057746   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:15.073207   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:15.558392   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:15.558480   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:15.574711   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.057794   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:16.057971   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:16.073882   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.557808   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:16.557879   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:16.571659   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:17.057817   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:17.057922   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:17.074250   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:17.557727   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:17.557809   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:17.573920   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:18.058504   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:18.058573   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:18.070636   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:18.558163   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:18.558262   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:18.570781   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:19.058321   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:19.058414   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:19.074887   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:19.558503   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:19.558596   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:19.570666   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
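
The repeated "Checking apiserver status ..." entries for process 45819 above are a poll loop: the pgrep command is retried roughly every 500ms until a kube-apiserver PID turns up or an overall deadline expires. A hedged Go sketch of such a loop (the run parameter stands in for minikube's SSH runner and is an assumption, not its real signature):

    package main

    import (
    	"context"
    	"fmt"
    	"strings"
    	"time"
    )

    // pollAPIServerPID retries the pgrep lookup on a fixed interval until a PID
    // appears or the context is cancelled. Sketch only.
    func pollAPIServerPID(ctx context.Context, run func(cmd string) (string, error), interval time.Duration) (string, error) {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		if out, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
    			if pid := strings.TrimSpace(out); pid != "" {
    				return pid, nil // apiserver process found
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return "", ctx.Err() // give up when the deadline hits
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    	defer cancel()
    	stub := func(string) (string, error) { return "", nil } // always "not found"
    	pid, err := pollAPIServerPID(ctx, stub, 500*time.Millisecond)
    	fmt.Println(pid, err)
    }
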
	I0130 20:39:16.040606   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:39:16.065469   45441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:39:16.099751   45441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:39:16.113444   45441 system_pods.go:59] 8 kube-system pods found
	I0130 20:39:16.113486   45441 system_pods.go:61] "coredns-5dd5756b68-2955f" [abae9f5c-ed48-494b-b014-a28f6290d772] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:39:16.113498   45441 system_pods.go:61] "etcd-default-k8s-diff-port-877742" [0f69a8d9-5549-4f3a-8b12-ee9e96e08271] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:39:16.113509   45441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-877742" [ab6cf2c3-cc75-44b8-b039-6e21881a9ade] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:39:16.113519   45441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-877742" [4b313734-cd1e-4229-afcd-4d0b517594ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:39:16.113533   45441 system_pods.go:61] "kube-proxy-s9ssn" [ea1c69e6-d319-41ee-a47f-4937f03ecdc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:39:16.113549   45441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-877742" [3f4d9e5f-1421-4576-839b-3bdfba56700b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:39:16.113566   45441 system_pods.go:61] "metrics-server-57f55c9bc5-hzfwg" [1e06ac92-f7ff-418a-9a8d-72d763566bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:39:16.113582   45441 system_pods.go:61] "storage-provisioner" [4cf793ab-e7a5-4a51-bcb9-a07bea323a44] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:39:16.113599   45441 system_pods.go:74] duration metric: took 13.827445ms to wait for pod list to return data ...
	I0130 20:39:16.113608   45441 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:39:16.121786   45441 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:39:16.121882   45441 node_conditions.go:123] node cpu capacity is 2
	I0130 20:39:16.121904   45441 node_conditions.go:105] duration metric: took 8.289345ms to run NodePressure ...
	I0130 20:39:16.121929   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:16.440112   45441 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:39:16.447160   45441 kubeadm.go:787] kubelet initialised
	I0130 20:39:16.447188   45441 kubeadm.go:788] duration metric: took 7.04624ms waiting for restarted kubelet to initialise ...
	I0130 20:39:16.447198   45441 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:39:16.457164   45441 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-2955f" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:16.463990   45441 pod_ready.go:97] node "default-k8s-diff-port-877742" hosting pod "coredns-5dd5756b68-2955f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.464020   45441 pod_ready.go:81] duration metric: took 6.825543ms waiting for pod "coredns-5dd5756b68-2955f" in "kube-system" namespace to be "Ready" ...
	E0130 20:39:16.464033   45441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-877742" hosting pod "coredns-5dd5756b68-2955f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.464044   45441 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:16.476983   45441 pod_ready.go:97] node "default-k8s-diff-port-877742" hosting pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.477077   45441 pod_ready.go:81] duration metric: took 12.988392ms waiting for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	E0130 20:39:16.477109   45441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-877742" hosting pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.477128   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:18.486083   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
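
The pod_ready.go lines above wait for each system-critical pod to report the "Ready" condition, skipping pods whose node is itself not Ready. A minimal sketch of that condition test using the upstream corev1 types (assumed dependency k8s.io/api; this is not minikube's own helper):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // podIsReady returns true when the pod's PodReady condition is True, which
    // is the status the log lines above are polling for.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	p := &corev1.Pod{}
    	p.Status.Conditions = []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}}
    	fmt.Println(podIsReady(p)) // true
    }
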
	I0130 20:39:16.397588   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:16.398050   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:16.398082   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:16.397988   46613 retry.go:31] will retry after 2.411227582s: waiting for machine to come up
	I0130 20:39:18.810874   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:18.811404   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:18.811439   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:18.811358   46613 retry.go:31] will retry after 2.231016506s: waiting for machine to come up
	I0130 20:39:19.296383   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:21.790307   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:20.058718   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:20.058800   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:20.074443   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:20.558683   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:20.558756   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:20.574765   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.058367   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:21.058456   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:21.074652   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.558528   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:21.558648   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:21.573650   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:22.058161   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:22.058280   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:22.070780   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:22.558448   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:22.558525   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:22.572220   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:23.057797   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:23.057884   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:23.071260   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:23.558193   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:23.558278   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:23.571801   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:24.058483   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:24.058603   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:24.070898   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:24.558465   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:24.558546   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:24.573966   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.008056   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:23.484244   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:23.987592   45441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.987615   45441 pod_ready.go:81] duration metric: took 7.510477497s waiting for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.987624   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.993335   45441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.993358   45441 pod_ready.go:81] duration metric: took 5.726687ms waiting for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.993373   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s9ssn" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.998021   45441 pod_ready.go:92] pod "kube-proxy-s9ssn" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.998045   45441 pod_ready.go:81] duration metric: took 4.664039ms waiting for pod "kube-proxy-s9ssn" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.998057   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:21.044853   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:21.045392   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:21.045423   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:21.045336   46613 retry.go:31] will retry after 3.525646558s: waiting for machine to come up
	I0130 20:39:24.573139   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:24.573573   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:24.573596   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:24.573532   46613 retry.go:31] will retry after 4.365207536s: waiting for machine to come up
	I0130 20:39:23.790893   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:25.791630   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:28.291352   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:25.058653   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:25.058753   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:25.072061   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:25.072091   45819 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:39:25.072115   45819 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:39:25.072127   45819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:39:25.072183   45819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:25.121788   45819 cri.go:89] found id: ""
	I0130 20:39:25.121863   45819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:39:25.137294   45819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:39:25.146157   45819 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:39:25.146213   45819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:25.155323   45819 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:25.155346   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:25.279765   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.617419   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.337617183s)
	I0130 20:39:26.617457   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.825384   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.916818   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:27.026546   45819 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:39:27.026647   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:27.527637   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.026724   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.527352   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.578771   45819 api_server.go:72] duration metric: took 1.552227614s to wait for apiserver process to appear ...
	I0130 20:39:28.578793   45819 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:39:28.578814   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:28.579348   45819 api_server.go:269] stopped: https://192.168.39.16:8443/healthz: Get "https://192.168.39.16:8443/healthz": dial tcp 192.168.39.16:8443: connect: connection refused
	I0130 20:39:29.078918   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
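
Once the apiserver process exists, the wait switches from pgrep to polling the /healthz endpoint over HTTPS, treating "connection refused" as "keep waiting". A short Go sketch of a single healthz probe (TLS verification is skipped here only to keep the example small; minikube authenticates against the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // checkHealthz performs one GET against the apiserver healthz URL and
    // returns an error unless it answers 200 OK.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. connection refused while the apiserver is still starting
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(checkHealthz("https://192.168.39.16:8443/healthz"))
    }
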
	I0130 20:39:26.006018   45441 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:27.506562   45441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:27.506596   45441 pod_ready.go:81] duration metric: took 3.50852897s waiting for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:27.506609   45441 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:29.514067   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:28.941922   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.942489   44923 main.go:141] libmachine: (no-preload-473743) Found IP for machine: 192.168.50.220
	I0130 20:39:28.942511   44923 main.go:141] libmachine: (no-preload-473743) Reserving static IP address...
	I0130 20:39:28.942528   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has current primary IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.943003   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "no-preload-473743", mac: "52:54:00:c5:07:4a", ip: "192.168.50.220"} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:28.943046   44923 main.go:141] libmachine: (no-preload-473743) DBG | skip adding static IP to network mk-no-preload-473743 - found existing host DHCP lease matching {name: "no-preload-473743", mac: "52:54:00:c5:07:4a", ip: "192.168.50.220"}
	I0130 20:39:28.943063   44923 main.go:141] libmachine: (no-preload-473743) Reserved static IP address: 192.168.50.220
	I0130 20:39:28.943081   44923 main.go:141] libmachine: (no-preload-473743) DBG | Getting to WaitForSSH function...
	I0130 20:39:28.943092   44923 main.go:141] libmachine: (no-preload-473743) Waiting for SSH to be available...
	I0130 20:39:28.945617   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.945983   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:28.946016   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.946192   44923 main.go:141] libmachine: (no-preload-473743) DBG | Using SSH client type: external
	I0130 20:39:28.946224   44923 main.go:141] libmachine: (no-preload-473743) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa (-rw-------)
	I0130 20:39:28.946257   44923 main.go:141] libmachine: (no-preload-473743) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:39:28.946268   44923 main.go:141] libmachine: (no-preload-473743) DBG | About to run SSH command:
	I0130 20:39:28.946279   44923 main.go:141] libmachine: (no-preload-473743) DBG | exit 0
	I0130 20:39:29.047127   44923 main.go:141] libmachine: (no-preload-473743) DBG | SSH cmd err, output: <nil>: 
	I0130 20:39:29.047505   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetConfigRaw
	I0130 20:39:29.048239   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:29.051059   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.051539   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.051572   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.051875   44923 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/config.json ...
	I0130 20:39:29.052098   44923 machine.go:88] provisioning docker machine ...
	I0130 20:39:29.052122   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:29.052328   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.052480   44923 buildroot.go:166] provisioning hostname "no-preload-473743"
	I0130 20:39:29.052503   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.052693   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.055532   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.055937   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.055968   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.056075   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.056267   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.056428   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.056644   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.056802   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.057242   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.057266   44923 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-473743 && echo "no-preload-473743" | sudo tee /etc/hostname
	I0130 20:39:29.199944   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-473743
	
	I0130 20:39:29.199987   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.202960   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.203402   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.203428   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.203648   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.203840   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.203974   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.204101   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.204253   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.204787   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.204815   44923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-473743' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-473743/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-473743' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:39:29.343058   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:39:29.343090   44923 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:39:29.343118   44923 buildroot.go:174] setting up certificates
	I0130 20:39:29.343131   44923 provision.go:83] configureAuth start
	I0130 20:39:29.343154   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.343457   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:29.346265   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.346671   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.346714   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.346889   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.349402   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.349799   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.349830   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.350015   44923 provision.go:138] copyHostCerts
	I0130 20:39:29.350079   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:39:29.350092   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:39:29.350146   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:39:29.350244   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:39:29.350253   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:39:29.350277   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:39:29.350343   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:39:29.350354   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:39:29.350371   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:39:29.350428   44923 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.no-preload-473743 san=[192.168.50.220 192.168.50.220 localhost 127.0.0.1 minikube no-preload-473743]
	I0130 20:39:29.671070   44923 provision.go:172] copyRemoteCerts
	I0130 20:39:29.671125   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:39:29.671150   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.673890   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.674199   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.674234   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.674386   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.674604   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.674744   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.674901   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:29.769184   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:39:29.797035   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 20:39:29.822932   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:39:29.849781   44923 provision.go:86] duration metric: configureAuth took 506.627652ms
	I0130 20:39:29.849818   44923 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:39:29.850040   44923 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 20:39:29.850134   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.852709   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.853108   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.853137   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.853331   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.853574   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.853757   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.853924   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.854108   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.854635   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.854660   44923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:39:30.232249   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:39:30.232288   44923 machine.go:91] provisioned docker machine in 1.180174143s
	I0130 20:39:30.232302   44923 start.go:300] post-start starting for "no-preload-473743" (driver="kvm2")
	I0130 20:39:30.232321   44923 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:39:30.232348   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.232668   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:39:30.232705   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.235383   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.235716   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.235745   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.235860   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.236049   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.236203   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.236346   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.332330   44923 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:39:30.337659   44923 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:39:30.337684   44923 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:39:30.337756   44923 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:39:30.337847   44923 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:39:30.337960   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:39:30.349830   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:30.374759   44923 start.go:303] post-start completed in 142.443985ms
	I0130 20:39:30.374780   44923 fix.go:56] fixHost completed within 23.926338591s
	I0130 20:39:30.374800   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.377807   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.378189   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.378244   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.378414   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.378605   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.378803   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.378954   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.379112   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:30.379649   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:30.379677   44923 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:39:30.512888   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647170.453705676
	
	I0130 20:39:30.512916   44923 fix.go:206] guest clock: 1706647170.453705676
	I0130 20:39:30.512925   44923 fix.go:219] Guest: 2024-01-30 20:39:30.453705676 +0000 UTC Remote: 2024-01-30 20:39:30.374783103 +0000 UTC m=+364.620017880 (delta=78.922573ms)
	I0130 20:39:30.512966   44923 fix.go:190] guest clock delta is within tolerance: 78.922573ms
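
The fix.go lines above compare the guest VM clock against the host clock and accept the machine when the difference falls inside a tolerance (here the measured delta is 78.9ms). A tiny sketch of that comparison (the tolerance value below is an assumption chosen for illustration):

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK reports whether guest and host clocks differ by no more than
    // the allowed tolerance, as the "guest clock delta" check above does.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(79 * time.Millisecond)
    	fmt.Println(clockDeltaOK(guest, host, 2*time.Second)) // true
    }
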
	I0130 20:39:30.512976   44923 start.go:83] releasing machines lock for "no-preload-473743", held for 24.064563389s
	I0130 20:39:30.513083   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.513387   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:30.516359   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.516699   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.516728   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.516908   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517590   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517747   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517817   44923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:39:30.517864   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.517954   44923 ssh_runner.go:195] Run: cat /version.json
	I0130 20:39:30.517972   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.520814   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521070   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521202   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.521228   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521456   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.521480   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521480   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.521682   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.521722   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.521844   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.521845   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.521997   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.522149   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.522424   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.632970   44923 ssh_runner.go:195] Run: systemctl --version
	I0130 20:39:30.638936   44923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:39:30.784288   44923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:39:30.792079   44923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:39:30.792150   44923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:39:30.809394   44923 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:39:30.809421   44923 start.go:475] detecting cgroup driver to use...
	I0130 20:39:30.809496   44923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:39:30.824383   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:39:30.838710   44923 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:39:30.838765   44923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:39:30.852928   44923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:39:30.867162   44923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:39:30.995737   44923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:39:31.113661   44923 docker.go:233] disabling docker service ...
	I0130 20:39:31.113726   44923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:39:31.127737   44923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:39:31.139320   44923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:39:31.240000   44923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:39:31.340063   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:39:31.353303   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:39:31.371834   44923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:39:31.371889   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.382579   44923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:39:31.382639   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.392544   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.403023   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.413288   44923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:39:31.423806   44923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:39:31.433817   44923 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:39:31.433884   44923 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:39:31.447456   44923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:39:31.457035   44923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:39:31.562847   44923 ssh_runner.go:195] Run: sudo systemctl restart crio
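
The sed commands above point CRI-O at the registry.k8s.io/pause:3.9 image and switch its cgroup manager to cgroupfs by editing /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A hedged Go sketch of one of those in-place rewrites (regular-expression based; the actual change is done with sed over SSH, not with this code):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setCrioPauseImage rewrites the pause_image line of a crio drop-in config,
    // mirroring the "sed -i 's|^.*pause_image = .*$|...|'" call in the log above.
    func setCrioPauseImage(conf, image string) string {
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
    }

    func main() {
    	conf := "# example drop-in\npause_image = \"registry.k8s.io/pause:3.6\"\n"
    	fmt.Print(setCrioPauseImage(conf, "registry.k8s.io/pause:3.9"))
    }
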
	I0130 20:39:31.752772   44923 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:39:31.752844   44923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:39:31.757880   44923 start.go:543] Will wait 60s for crictl version
	I0130 20:39:31.757943   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:31.761967   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:39:31.800658   44923 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:39:31.800725   44923 ssh_runner.go:195] Run: crio --version
	I0130 20:39:31.852386   44923 ssh_runner.go:195] Run: crio --version
	I0130 20:39:31.910758   44923 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0130 20:39:30.791795   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:33.292307   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:34.079616   45819 api_server.go:269] stopped: https://192.168.39.16:8443/healthz: Get "https://192.168.39.16:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 20:39:34.079674   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:31.516571   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:33.517547   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
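The pod_ready.go lines above and below poll a pod until its Ready condition turns True. A minimal client-go sketch of that check, assuming a reachable cluster; the kubeconfig path and pod name are placeholders:

    // podready_sketch.go - reports whether a pod's Ready condition is True,
    // roughly what the pod_ready.go polling in this log waits for.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ok, err := podReady(cs, "kube-system", "metrics-server-57f55c9bc5-hzfwg")
        fmt.Println(ok, err)
    }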
	I0130 20:39:31.912241   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:31.915377   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:31.915705   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:31.915735   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:31.915985   44923 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0130 20:39:31.920870   44923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:31.934252   44923 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 20:39:31.934304   44923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:31.975687   44923 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0130 20:39:31.975714   44923 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 20:39:31.975762   44923 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:31.975874   44923 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:31.975900   44923 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:31.975936   44923 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0130 20:39:31.975959   44923 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:31.975903   44923 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:31.976051   44923 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:31.976063   44923 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:31.977466   44923 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:31.977485   44923 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:31.977525   44923 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0130 20:39:31.977531   44923 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:31.977569   44923 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:31.977559   44923 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:31.977663   44923 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:31.977812   44923 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:32.130396   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0130 20:39:32.132105   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.135297   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.135817   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.136698   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.154928   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.172264   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.355420   44923 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0130 20:39:32.355504   44923 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.355537   44923 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0130 20:39:32.355580   44923 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.355454   44923 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0130 20:39:32.355636   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355675   44923 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.355606   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355724   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355763   44923 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0130 20:39:32.355803   44923 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.355844   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355855   44923 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0130 20:39:32.355884   44923 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.355805   44923 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0130 20:39:32.355928   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355929   44923 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.355974   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.360081   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.370164   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.370202   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.370243   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.370259   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.370374   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.466609   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.466714   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.503174   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:32.503293   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:32.507888   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0130 20:39:32.507963   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0130 20:39:32.508061   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:32.508061   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:32.518772   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:32.518883   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0130 20:39:32.518906   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.518932   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:32.518951   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.518824   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0130 20:39:32.518996   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0130 20:39:32.519041   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:32.521450   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0130 20:39:32.521493   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0130 20:39:32.848844   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.579929   44923 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.060972543s)
	I0130 20:39:34.579971   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0130 20:39:34.580001   44923 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.060936502s)
	I0130 20:39:34.580034   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0130 20:39:34.580045   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.061073363s)
	I0130 20:39:34.580059   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0130 20:39:34.580082   44923 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.731208309s)
	I0130 20:39:34.580132   44923 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0130 20:39:34.580088   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:34.580225   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:34.580173   44923 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.580343   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:34.585311   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.796586   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:34.796615   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:34.796633   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:34.846035   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:34.846071   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:35.079544   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:35.091673   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 20:39:35.091710   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 20:39:35.579233   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:35.587045   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 20:39:35.587071   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 20:39:36.079775   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:36.086927   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0130 20:39:36.095953   45819 api_server.go:141] control plane version: v1.16.0
	I0130 20:39:36.095976   45819 api_server.go:131] duration metric: took 7.517178171s to wait for apiserver health ...
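The healthz wait above polls https://192.168.39.16:8443/healthz until it answers 200 "ok", treating the intermediate 403 and 500 responses as not-ready-yet. A simplified Go sketch of that loop (no client auth, certificate verification skipped, timings illustrative):

    // healthz_sketch.go - polls an apiserver /healthz endpoint until it returns 200,
    // mirroring the api_server.go wait in this log (simplified).
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.39.16:8443/healthz"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
                // 403 (RBAC not bootstrapped) and 500 (post-start hooks pending) mean "not yet".
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }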
	I0130 20:39:36.095985   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:39:36.095992   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:36.097742   45819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:39:35.792385   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:37.792648   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:36.099012   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:39:36.108427   45819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:39:36.126083   45819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:39:36.138855   45819 system_pods.go:59] 8 kube-system pods found
	I0130 20:39:36.138882   45819 system_pods.go:61] "coredns-5644d7b6d9-547k4" [6b1119fe-9c8a-44fb-b813-58271228b290] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:39:36.138888   45819 system_pods.go:61] "coredns-5644d7b6d9-dtfzh" [4cbd4f36-bc01-4f55-ba50-b7dcdcb35b9b] Running
	I0130 20:39:36.138894   45819 system_pods.go:61] "etcd-old-k8s-version-150971" [22eeed2c-7454-4b9d-8b4d-1c9a2e5feaf7] Running
	I0130 20:39:36.138899   45819 system_pods.go:61] "kube-apiserver-old-k8s-version-150971" [5ef062e6-2f78-485f-9420-e8714128e39f] Running
	I0130 20:39:36.138903   45819 system_pods.go:61] "kube-controller-manager-old-k8s-version-150971" [4e5df6df-486e-47a8-89b8-8d962212ec3e] Running
	I0130 20:39:36.138907   45819 system_pods.go:61] "kube-proxy-ncl7z" [51b28456-0070-46fc-b647-e28d6bdcfde2] Running
	I0130 20:39:36.138914   45819 system_pods.go:61] "kube-scheduler-old-k8s-version-150971" [384c4dfa-180b-4ec3-9e12-3c6d9e581c0e] Running
	I0130 20:39:36.138918   45819 system_pods.go:61] "storage-provisioner" [8a75a04c-1b80-41f6-9dfd-a7ee6f908b9d] Running
	I0130 20:39:36.138928   45819 system_pods.go:74] duration metric: took 12.820934ms to wait for pod list to return data ...
	I0130 20:39:36.138936   45819 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:39:36.142193   45819 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:39:36.142224   45819 node_conditions.go:123] node cpu capacity is 2
	I0130 20:39:36.142236   45819 node_conditions.go:105] duration metric: took 3.295582ms to run NodePressure ...
	I0130 20:39:36.142256   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:36.480656   45819 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:39:36.486153   45819 retry.go:31] will retry after 323.854639ms: kubelet not initialised
	I0130 20:39:36.816707   45819 retry.go:31] will retry after 303.422684ms: kubelet not initialised
	I0130 20:39:37.125369   45819 retry.go:31] will retry after 697.529029ms: kubelet not initialised
	I0130 20:39:37.829322   45819 retry.go:31] will retry after 626.989047ms: kubelet not initialised
	I0130 20:39:38.463635   45819 retry.go:31] will retry after 1.390069174s: kubelet not initialised
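The retry.go messages above re-check kubelet initialisation with a growing delay between attempts. A rough Go sketch of such a retry loop; kubeletInitialised is a stand-in for the real check:

    // retry_sketch.go - retry with a growing, jittered delay, similar in spirit to the
    // "will retry after ..." messages in this log.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // kubeletInitialised is a placeholder for the real readiness check.
    func kubeletInitialised() error {
        return errors.New("kubelet not initialised")
    }

    func main() {
        delay := 300 * time.Millisecond
        for attempt := 1; attempt <= 10; attempt++ {
            if err := kubeletInitialised(); err == nil {
                fmt.Println("kubelet initialised")
                return
            }
            // Add jitter and grow the base delay each round.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
            time.Sleep(wait)
            delay *= 2
        }
        fmt.Println("giving up")
    }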
	I0130 20:39:35.519218   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:38.013652   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:40.014621   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:37.168054   44923 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.582708254s)
	I0130 20:39:37.168111   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0130 20:39:37.168188   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.587929389s)
	I0130 20:39:37.168204   44923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:37.168226   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0130 20:39:37.168257   44923 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:37.168330   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:37.173865   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0130 20:39:39.259662   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.091304493s)
	I0130 20:39:39.259692   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0130 20:39:39.259719   44923 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:39.259777   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:40.291441   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:42.292550   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:39.861179   45819 retry.go:31] will retry after 1.194254513s: kubelet not initialised
	I0130 20:39:41.067315   45819 retry.go:31] will retry after 3.766341089s: kubelet not initialised
	I0130 20:39:42.016919   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:44.514681   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:43.804203   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.54440472s)
	I0130 20:39:43.804228   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0130 20:39:43.804262   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:43.804360   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:44.790577   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:46.791751   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:44.839501   45819 retry.go:31] will retry after 2.957753887s: kubelet not initialised
	I0130 20:39:47.802749   45819 retry.go:31] will retry after 4.750837771s: kubelet not initialised
	I0130 20:39:47.016112   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:49.517716   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:46.385349   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.580960989s)
	I0130 20:39:46.385378   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0130 20:39:46.385403   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:46.385446   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:48.570468   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.184994355s)
	I0130 20:39:48.570504   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0130 20:39:48.570529   44923 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:48.570575   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:49.318398   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0130 20:39:49.318449   44923 cache_images.go:123] Successfully loaded all cached images
	I0130 20:39:49.318457   44923 cache_images.go:92] LoadImages completed in 17.342728639s
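The LoadImages step above asks podman whether each required image already exists and, when it does not, loads the cached tarball from /var/lib/minikube/images with "podman load -i". A minimal Go sketch of that flow; the ensureImage helper and the image list are illustrative:

    // loadimages_sketch.go - for each required image: if podman does not know it,
    // load the pre-copied tarball, as in the LoadImages step above.
    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
    )

    func ensureImage(image, tarball string) error {
        // Does the runtime already have this image?
        if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
            return nil // already present
        }
        // Missing: load it from the cached tarball.
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
        }
        return nil
    }

    func main() {
        images := map[string]string{
            "registry.k8s.io/kube-apiserver:v1.29.0-rc.2": "kube-apiserver_v1.29.0-rc.2",
            "registry.k8s.io/etcd:3.5.10-0":               "etcd_3.5.10-0",
        }
        for image, file := range images {
            if err := ensureImage(image, filepath.Join("/var/lib/minikube/images", file)); err != nil {
                panic(err)
            }
        }
    }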
	I0130 20:39:49.318542   44923 ssh_runner.go:195] Run: crio config
	I0130 20:39:49.393074   44923 cni.go:84] Creating CNI manager for ""
	I0130 20:39:49.393094   44923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:49.393116   44923 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:39:49.393143   44923 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.220 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-473743 NodeName:no-preload-473743 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:39:49.393301   44923 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-473743"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:39:49.393384   44923 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-473743 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-473743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:39:49.393445   44923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0130 20:39:49.403506   44923 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:39:49.403582   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:39:49.412473   44923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0130 20:39:49.429600   44923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0130 20:39:49.445613   44923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0130 20:39:49.462906   44923 ssh_runner.go:195] Run: grep 192.168.50.220	control-plane.minikube.internal$ /etc/hosts
	I0130 20:39:49.466844   44923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:49.479363   44923 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743 for IP: 192.168.50.220
	I0130 20:39:49.479388   44923 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:39:49.479540   44923 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:39:49.479599   44923 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:39:49.479682   44923 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.key
	I0130 20:39:49.479766   44923 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.key.ef9da43a
	I0130 20:39:49.479832   44923 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.key
	I0130 20:39:49.479984   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:39:49.480020   44923 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:39:49.480031   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:39:49.480052   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:39:49.480082   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:39:49.480104   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:39:49.480148   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:49.480782   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:39:49.504588   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 20:39:49.530340   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:39:49.552867   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:39:49.575974   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:39:49.598538   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:39:49.623489   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:39:49.646965   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:39:49.671998   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:39:49.695493   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:39:49.718975   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:39:49.741793   44923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:39:49.758291   44923 ssh_runner.go:195] Run: openssl version
	I0130 20:39:49.765053   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:39:49.775428   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.780081   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.780130   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.785510   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:39:49.797983   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:39:49.807934   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.812367   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.812416   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.818021   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:39:49.827603   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:39:49.837248   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.841789   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.841838   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.847684   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:39:49.857387   44923 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:39:49.862411   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:39:49.871862   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:39:49.877904   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:39:49.883820   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:39:49.890534   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:39:49.898143   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
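The openssl invocations above are equivalent to asking whether each certificate expires within the next 86400 seconds. A small Go sketch of the same check using crypto/x509; the certificate path is taken from the log:

    // certcheck_sketch.go - rough equivalent of "openssl x509 -checkend 86400":
    // report whether a certificate expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM data", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True if "now + d" is past the certificate's NotAfter.
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }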
	I0130 20:39:49.905607   44923 kubeadm.go:404] StartCluster: {Name:no-preload-473743 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-473743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.220 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:39:49.905713   44923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:39:49.905768   44923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:49.956631   44923 cri.go:89] found id: ""
	I0130 20:39:49.956705   44923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:39:49.967500   44923 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:39:49.967516   44923 kubeadm.go:636] restartCluster start
	I0130 20:39:49.967572   44923 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:39:49.977077   44923 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:49.978191   44923 kubeconfig.go:92] found "no-preload-473743" server: "https://192.168.50.220:8443"
	I0130 20:39:49.980732   44923 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:39:49.990334   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:49.990377   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:50.001427   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:50.491017   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:50.491080   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:50.503162   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:48.792438   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:51.290002   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:53.291511   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:52.558586   45819 retry.go:31] will retry after 13.209460747s: kubelet not initialised
	I0130 20:39:52.013659   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:54.013756   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:50.991212   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:50.991312   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:51.004155   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:51.491296   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:51.491369   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:51.502771   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:51.991398   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:51.991529   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:52.004164   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:52.490700   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:52.490817   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:52.504616   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:52.991009   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:52.991101   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:53.004178   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:53.490804   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:53.490897   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:53.502856   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:53.990345   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:53.990451   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:54.003812   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:54.491414   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:54.491522   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:54.502969   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:54.991126   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:54.991212   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:55.003001   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:55.490521   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:55.490609   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:55.501901   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:55.791198   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:58.289750   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:56.513098   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:58.514459   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:55.990820   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:55.990893   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:56.002224   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:56.490338   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:56.490432   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:56.502497   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:56.991097   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:56.991189   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:57.002115   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:57.490604   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:57.490681   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:57.501777   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:57.991320   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:57.991419   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:58.002136   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:58.490641   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:58.490724   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:58.502247   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:58.990830   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:58.990951   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:59.001469   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:59.491109   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:59.491223   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:59.502348   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:59.991097   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:59.991182   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:40:00.002945   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:40:00.002978   44923 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:40:00.002986   44923 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:40:00.002996   44923 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:40:00.003068   44923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:40:00.045168   44923 cri.go:89] found id: ""
	I0130 20:40:00.045245   44923 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:40:00.061704   44923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:40:00.074448   44923 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:40:00.074505   44923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:40:00.083478   44923 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:40:00.083502   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:00.200934   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:00.791680   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:02.791880   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:00.515342   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:02.515914   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:05.014585   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:00.824616   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.029317   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.146596   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.232362   44923 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:40:01.232439   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:01.733118   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:02.232964   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:02.732910   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.232934   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.732852   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.758730   44923 api_server.go:72] duration metric: took 2.526367424s to wait for apiserver process to appear ...
	I0130 20:40:03.758768   44923 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:40:03.758786   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:05.290228   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:07.290842   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:07.869847   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:40:07.869897   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:40:07.869919   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:07.986795   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:07.986841   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:08.259140   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:08.265487   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:08.265523   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:08.759024   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:08.764138   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:08.764163   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:09.259821   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:09.265120   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0130 20:40:09.275933   44923 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 20:40:09.275956   44923 api_server.go:131] duration metric: took 5.517181599s to wait for apiserver health ...
	I0130 20:40:09.275965   44923 cni.go:84] Creating CNI manager for ""
	I0130 20:40:09.275971   44923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:40:09.277688   44923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:40:05.773670   45819 retry.go:31] will retry after 17.341210204s: kubelet not initialised
	I0130 20:40:07.014677   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:09.516836   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:09.279058   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:40:09.307862   44923 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:40:09.339259   44923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:40:09.355136   44923 system_pods.go:59] 8 kube-system pods found
	I0130 20:40:09.355177   44923 system_pods.go:61] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:40:09.355185   44923 system_pods.go:61] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:40:09.355194   44923 system_pods.go:61] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:40:09.355201   44923 system_pods.go:61] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:40:09.355210   44923 system_pods.go:61] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:40:09.355219   44923 system_pods.go:61] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:40:09.355238   44923 system_pods.go:61] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:40:09.355249   44923 system_pods.go:61] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:40:09.355256   44923 system_pods.go:74] duration metric: took 15.951624ms to wait for pod list to return data ...
	I0130 20:40:09.355277   44923 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:40:09.361985   44923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:40:09.362014   44923 node_conditions.go:123] node cpu capacity is 2
	I0130 20:40:09.362025   44923 node_conditions.go:105] duration metric: took 6.74245ms to run NodePressure ...
	I0130 20:40:09.362045   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:09.678111   44923 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:40:09.687808   44923 kubeadm.go:787] kubelet initialised
	I0130 20:40:09.687828   44923 kubeadm.go:788] duration metric: took 9.689086ms waiting for restarted kubelet to initialise ...
	I0130 20:40:09.687835   44923 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:09.694574   44923 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.700190   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "coredns-76f75df574-d4c7t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.700214   44923 pod_ready.go:81] duration metric: took 5.613522ms waiting for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.700230   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "coredns-76f75df574-d4c7t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.700237   44923 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.705513   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "etcd-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.705534   44923 pod_ready.go:81] duration metric: took 5.286859ms waiting for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.705545   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "etcd-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.705553   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.710360   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-apiserver-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.710378   44923 pod_ready.go:81] duration metric: took 4.814631ms waiting for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.710388   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-apiserver-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.710396   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.746412   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.746447   44923 pod_ready.go:81] duration metric: took 36.037006ms waiting for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.746460   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.746469   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.143330   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-proxy-zklzt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.143364   44923 pod_ready.go:81] duration metric: took 396.879081ms waiting for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.143377   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-proxy-zklzt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.143385   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.549132   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-scheduler-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.549171   44923 pod_ready.go:81] duration metric: took 405.77755ms waiting for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.549192   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-scheduler-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.549201   44923 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.942777   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.942802   44923 pod_ready.go:81] duration metric: took 393.589996ms waiting for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.942811   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.942817   44923 pod_ready.go:38] duration metric: took 1.254975084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:10.942834   44923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:40:10.954894   44923 ops.go:34] apiserver oom_adj: -16
	I0130 20:40:10.954916   44923 kubeadm.go:640] restartCluster took 20.987393757s
	I0130 20:40:10.954926   44923 kubeadm.go:406] StartCluster complete in 21.049328159s
	I0130 20:40:10.954944   44923 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:40:10.955025   44923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:40:10.956906   44923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:40:10.957249   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:40:10.957343   44923 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:40:10.957411   44923 addons.go:69] Setting storage-provisioner=true in profile "no-preload-473743"
	I0130 20:40:10.957434   44923 addons.go:234] Setting addon storage-provisioner=true in "no-preload-473743"
	I0130 20:40:10.957440   44923 addons.go:69] Setting metrics-server=true in profile "no-preload-473743"
	I0130 20:40:10.957447   44923 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0130 20:40:10.957451   44923 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:40:10.957471   44923 addons.go:234] Setting addon metrics-server=true in "no-preload-473743"
	W0130 20:40:10.957481   44923 addons.go:243] addon metrics-server should already be in state true
	I0130 20:40:10.957512   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.957522   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.957946   44923 addons.go:69] Setting default-storageclass=true in profile "no-preload-473743"
	I0130 20:40:10.957911   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958230   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.958246   44923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-473743"
	I0130 20:40:10.958477   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958517   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.958600   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958621   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.962458   44923 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-473743" context rescaled to 1 replicas
	I0130 20:40:10.962497   44923 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.220 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:40:10.964710   44923 out.go:177] * Verifying Kubernetes components...
	I0130 20:40:10.966259   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:40:10.975195   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45125
	I0130 20:40:10.975661   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.976231   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.976262   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.976885   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.977509   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.977542   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.978199   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0130 20:40:10.978220   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I0130 20:40:10.979039   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.979106   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.979581   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.979600   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.979584   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.979663   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.979964   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.980074   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.980160   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:10.980655   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.980690   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.984068   44923 addons.go:234] Setting addon default-storageclass=true in "no-preload-473743"
	W0130 20:40:10.984119   44923 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:40:10.984155   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.984564   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.984615   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.997126   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
	I0130 20:40:10.997598   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.997990   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.998006   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.998355   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.998520   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:10.998838   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0130 20:40:10.999186   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.999589   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.999604   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.000003   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.000289   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:11.000433   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.002723   44923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:40:11.001789   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.004317   44923 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:40:11.004329   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:40:11.004345   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.005791   44923 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:40:11.007234   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:40:11.007246   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:40:11.007259   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.006415   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0130 20:40:11.007375   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.007826   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:11.008219   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.008258   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.008405   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.008550   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:11.008566   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.008624   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.008780   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.008900   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.008904   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.009548   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:11.009578   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:11.010414   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.010713   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.010733   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.010938   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.011137   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.011308   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.011424   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.047889   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44097
	I0130 20:40:11.048317   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:11.048800   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:11.048820   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.049210   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.049451   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:11.051439   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.052012   44923 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:40:11.052030   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:40:11.052049   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.055336   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.055865   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.055888   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.055976   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.056175   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.056344   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.056461   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.176670   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:40:11.176694   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:40:11.182136   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:40:11.194238   44923 node_ready.go:35] waiting up to 6m0s for node "no-preload-473743" to be "Ready" ...
	I0130 20:40:11.194301   44923 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 20:40:11.213877   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:40:11.222566   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:40:11.222591   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:40:11.264089   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:40:11.264119   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:40:11.337758   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:40:12.237415   44923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.055244284s)
	I0130 20:40:12.237483   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237482   44923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023570997s)
	I0130 20:40:12.237504   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237521   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237538   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237867   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.237927   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.237949   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237964   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237973   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.237986   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.238018   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.238030   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.238303   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.238319   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.238415   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.238473   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.238485   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.245407   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.245432   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.245653   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.245670   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.287632   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.287660   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.287973   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.287998   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.288000   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.288014   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.288024   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.288266   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.288286   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.288297   44923 addons.go:470] Verifying addon metrics-server=true in "no-preload-473743"
	I0130 20:40:12.288352   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.290298   44923 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:40:09.291762   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:11.791994   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:12.016265   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:14.515097   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:12.291867   44923 addons.go:505] enable addons completed in 1.334521495s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:40:13.200767   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:15.699345   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:14.291583   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:16.292248   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:17.014332   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:19.014556   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:18.198624   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:18.699015   44923 node_ready.go:49] node "no-preload-473743" has status "Ready":"True"
	I0130 20:40:18.699041   44923 node_ready.go:38] duration metric: took 7.504770144s waiting for node "no-preload-473743" to be "Ready" ...
	I0130 20:40:18.699050   44923 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:18.709647   44923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.718022   44923 pod_ready.go:92] pod "coredns-76f75df574-d4c7t" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:18.718046   44923 pod_ready.go:81] duration metric: took 8.370541ms waiting for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.718054   44923 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.722992   44923 pod_ready.go:92] pod "etcd-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:18.723012   44923 pod_ready.go:81] duration metric: took 4.951762ms waiting for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.723020   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:20.732288   44923 pod_ready.go:102] pod "kube-apiserver-no-preload-473743" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:18.791445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:21.290205   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.123817   45819 kubeadm.go:787] kubelet initialised
	I0130 20:40:23.123842   45819 kubeadm.go:788] duration metric: took 46.643164333s waiting for restarted kubelet to initialise ...
	I0130 20:40:23.123849   45819 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:23.128282   45819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.132665   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.132688   45819 pod_ready.go:81] duration metric: took 4.375362ms waiting for pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.132701   45819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.137072   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.137092   45819 pod_ready.go:81] duration metric: took 4.379467ms waiting for pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.137102   45819 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.142038   45819 pod_ready.go:92] pod "etcd-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.142058   45819 pod_ready.go:81] duration metric: took 4.949104ms waiting for pod "etcd-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.142070   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.146657   45819 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.146676   45819 pod_ready.go:81] duration metric: took 4.598238ms waiting for pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.146686   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.518159   45819 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.518189   45819 pod_ready.go:81] duration metric: took 371.488133ms waiting for pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.518203   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ncl7z" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.919594   45819 pod_ready.go:92] pod "kube-proxy-ncl7z" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.919628   45819 pod_ready.go:81] duration metric: took 401.417322ms waiting for pod "kube-proxy-ncl7z" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.919644   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:24.318125   45819 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:24.318152   45819 pod_ready.go:81] duration metric: took 398.499457ms waiting for pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:24.318166   45819 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.513600   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.514060   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:21.233466   44923 pod_ready.go:92] pod "kube-apiserver-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.233494   44923 pod_ready.go:81] duration metric: took 2.510466903s waiting for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.233507   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.240688   44923 pod_ready.go:92] pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.240709   44923 pod_ready.go:81] duration metric: took 7.194165ms waiting for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.240721   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.248250   44923 pod_ready.go:92] pod "kube-proxy-zklzt" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.248271   44923 pod_ready.go:81] duration metric: took 7.542304ms waiting for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.248278   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.256673   44923 pod_ready.go:92] pod "kube-scheduler-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.256700   44923 pod_ready.go:81] duration metric: took 2.008414366s waiting for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.256712   44923 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:25.263480   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.790334   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.290232   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.292270   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.324649   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.825120   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.016305   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.513650   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:27.264434   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:29.764240   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:30.793210   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:33.292255   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:31.326850   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:33.824698   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:30.514448   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:32.518435   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.013676   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:32.264144   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:34.763689   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.789964   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.791095   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.825018   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:38.326094   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.014222   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:39.517868   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.265137   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:39.764115   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:40.290332   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.290850   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:40.327135   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.824370   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.014917   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.516872   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.264387   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.265504   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.291131   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.790512   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.827108   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:47.327816   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.518922   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.014136   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.765151   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.265178   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:48.790952   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.291730   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.824442   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:52.325401   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.014513   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.518388   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.266567   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.764501   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.789915   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:55.790332   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:57.791445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:54.825612   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:57.324364   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:59.327308   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:56.020804   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:58.515544   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:56.263707   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:58.264200   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:00.264261   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:59.792066   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:02.289879   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:01.824631   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:03.824749   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:01.014649   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:03.014805   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:05.017318   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:02.763825   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:04.764040   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:04.290927   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.791853   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.326570   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:08.824889   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:07.516190   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:10.018532   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.765257   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:09.263466   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:09.290744   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:11.791416   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:10.825025   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:13.324947   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:12.514850   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:14.522700   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:11.263911   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:13.763429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:15.766371   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:14.289786   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:16.291753   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:15.325297   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:17.824762   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:17.014087   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:19.518139   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:18.263727   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:20.263854   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:18.791517   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:21.292155   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:19.825751   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:22.324733   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:21.518205   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:24.015562   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:22.767815   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:25.263283   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:23.790847   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.290464   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:24.824063   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.825938   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:29.325683   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.016724   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:28.514670   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:27.264429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:29.264577   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:28.791861   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.291558   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.824367   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.824771   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:30.515432   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.014091   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.265902   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.764211   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.764788   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.791968   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:36.290991   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:38.291383   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.824891   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.825500   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.514120   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.514579   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:39.516165   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.765006   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.263816   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.791224   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.792487   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.326148   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.825282   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.014531   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:44.514337   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.264845   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:44.764275   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:45.290370   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.790557   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:45.325184   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.825091   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:46.515035   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.013829   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.263752   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.263882   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.790715   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:52.291348   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:50.326963   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:52.825278   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:51.014381   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:53.016755   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:51.264167   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:53.264888   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.265000   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:54.291846   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:56.790351   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.325156   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:57.325446   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:59.326114   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.515866   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:58.013768   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:00.014052   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:57.763548   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:59.764374   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:58.790584   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:01.294420   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:01.827046   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.325425   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:02.514100   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.516981   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:02.264420   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.264851   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:03.790918   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.290560   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:08.291334   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.824232   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:08.824527   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:07.014375   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:09.513980   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.764222   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:09.264299   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:10.292477   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:12.795626   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:10.825706   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:13.325572   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:11.514369   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:14.016090   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:11.264881   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:13.763625   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.764616   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.290292   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:17.790263   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.326185   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:17.826504   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:16.518263   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:19.014219   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:18.265723   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:20.764663   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:19.792068   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:22.292221   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:20.325069   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:22.326307   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:21.014811   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:23.014876   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:25.017016   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:23.264098   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:25.267065   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:24.791616   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.291739   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:24.825416   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:26.826380   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.325717   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.513692   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:30.015246   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.763938   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.764135   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.789997   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.790272   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.825466   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:33.826959   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:32.513718   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:35.014948   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.780185   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:34.265062   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:33.790477   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.290139   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.291801   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.325475   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.825210   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:37.513778   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:39.518155   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.764137   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.765005   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:40.790050   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:42.791739   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:41.325239   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:43.826300   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:42.013844   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:44.014396   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:41.268687   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:43.765101   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:45.290120   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:47.291365   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.325321   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.824944   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.015721   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.514689   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.269498   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.763780   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:50.765289   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:49.790212   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:52.291090   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:51.324622   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:53.324873   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:51.015934   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:53.016171   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:52.765777   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.264419   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:54.292666   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:56.790098   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.825230   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.324546   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.514240   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.014796   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:57.764094   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:59.764594   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.790445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.790844   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:03.290632   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.325916   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:02.824174   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.514203   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:02.515317   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:05.018840   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:01.767672   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:04.263736   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:04.290221   45037 pod_ready.go:81] duration metric: took 4m0.006974938s waiting for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	E0130 20:43:04.290244   45037 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:43:04.290252   45037 pod_ready.go:38] duration metric: took 4m4.550384705s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
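	(Editorial note, not part of the captured harness output: the WaitExtra deadline above expired because the metrics-server pod never reported a Ready condition within the 4-minute window. As a hedged illustration, using a placeholder for the minikube profile under test in this run, the same readiness condition could be inspected directly with kubectl:
	  # <profile> is a placeholder; substitute the profile/context from this run
	  kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
	    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	)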
	I0130 20:43:04.290265   45037 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:43:04.290289   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:04.290330   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:04.354567   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:04.354594   45037 cri.go:89] found id: ""
	I0130 20:43:04.354603   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:04.354664   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.359890   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:04.359961   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:04.399415   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:04.399437   45037 cri.go:89] found id: ""
	I0130 20:43:04.399444   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:04.399484   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.404186   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:04.404241   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:04.445968   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:04.445994   45037 cri.go:89] found id: ""
	I0130 20:43:04.446003   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:04.446060   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.450215   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:04.450285   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:04.492438   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:04.492462   45037 cri.go:89] found id: ""
	I0130 20:43:04.492476   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:04.492537   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.497227   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:04.497301   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:04.535936   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:04.535960   45037 cri.go:89] found id: ""
	I0130 20:43:04.535970   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:04.536026   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.540968   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:04.541046   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:04.584192   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:04.584214   45037 cri.go:89] found id: ""
	I0130 20:43:04.584222   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:04.584280   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.588842   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:04.588914   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:04.630957   45037 cri.go:89] found id: ""
	I0130 20:43:04.630984   45037 logs.go:276] 0 containers: []
	W0130 20:43:04.630994   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:04.631000   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:04.631057   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:04.672712   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:04.672741   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:04.672747   45037 cri.go:89] found id: ""
	I0130 20:43:04.672757   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:04.672830   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.677537   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.681806   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:04.681833   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:04.743389   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:04.743420   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:04.783857   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:04.783891   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:04.838800   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:04.838827   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:04.897337   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:04.897361   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:04.954337   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:04.954367   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:05.110447   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:05.110476   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:05.169238   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:05.169275   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:05.209860   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:05.209890   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:05.224272   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:05.224296   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:05.264818   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:05.264857   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:05.304626   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:05.304657   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:05.748336   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:05.748377   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.306639   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:43:08.324001   45037 api_server.go:72] duration metric: took 4m16.400279002s to wait for apiserver process to appear ...
	I0130 20:43:08.324028   45037 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:43:08.324061   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:08.324111   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:08.364000   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.364026   45037 cri.go:89] found id: ""
	I0130 20:43:08.364036   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:08.364093   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.368770   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:08.368843   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:08.411371   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:08.411394   45037 cri.go:89] found id: ""
	I0130 20:43:08.411404   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:08.411462   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.415582   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:08.415648   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:08.455571   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:08.455601   45037 cri.go:89] found id: ""
	I0130 20:43:08.455612   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:08.455662   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.459908   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:08.459972   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:08.497350   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:08.497374   45037 cri.go:89] found id: ""
	I0130 20:43:08.497383   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:08.497441   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.501504   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:08.501552   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:08.550031   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:08.550057   45037 cri.go:89] found id: ""
	I0130 20:43:08.550066   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:08.550181   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.555166   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:08.555215   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:08.590903   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:08.590929   45037 cri.go:89] found id: ""
	I0130 20:43:08.590939   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:08.590997   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.594837   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:08.594888   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:08.630989   45037 cri.go:89] found id: ""
	I0130 20:43:08.631015   45037 logs.go:276] 0 containers: []
	W0130 20:43:08.631024   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:08.631029   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:08.631072   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:08.669579   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:08.669603   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:08.669609   45037 cri.go:89] found id: ""
	I0130 20:43:08.669617   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:08.669666   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.673938   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.677733   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:08.677757   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:08.726492   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:08.726519   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:04.825623   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:07.331997   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:07.514074   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:09.514484   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:06.264040   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:08.264505   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:10.764072   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:08.740624   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:08.740645   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:08.792517   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:08.792547   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:08.829131   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:08.829166   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:08.870777   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:08.870802   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:08.909648   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:08.909678   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.953671   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:08.953701   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:08.989624   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:08.989648   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:09.383141   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:09.383174   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:09.442685   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:09.442719   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:09.563370   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:09.563398   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:09.614390   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:09.614422   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:12.156906   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:43:12.161980   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 200:
	ok
	I0130 20:43:12.163284   45037 api_server.go:141] control plane version: v1.28.4
	I0130 20:43:12.163308   45037 api_server.go:131] duration metric: took 3.839271753s to wait for apiserver health ...
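	(Editorial note, not part of the captured log: the healthz probe recorded above is a plain HTTPS GET against the apiserver and can be reproduced from the host. The address 192.168.61.63:8443 is specific to this run, and -k skips verification of the cluster's self-signed certificate:
	  curl -k https://192.168.61.63:8443/healthz
	  # a healthy apiserver answers 200 with the body: ok
	)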
	I0130 20:43:12.163318   45037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:43:12.163343   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:12.163389   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:12.207351   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:12.207372   45037 cri.go:89] found id: ""
	I0130 20:43:12.207381   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:12.207436   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.213923   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:12.213987   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:12.263647   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:12.263680   45037 cri.go:89] found id: ""
	I0130 20:43:12.263690   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:12.263743   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.268327   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:12.268381   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:12.310594   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:12.310614   45037 cri.go:89] found id: ""
	I0130 20:43:12.310622   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:12.310670   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.315134   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:12.315185   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:12.359384   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:12.359404   45037 cri.go:89] found id: ""
	I0130 20:43:12.359412   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:12.359468   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.363796   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:12.363856   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:12.399741   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:12.399771   45037 cri.go:89] found id: ""
	I0130 20:43:12.399783   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:12.399844   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.404237   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:12.404302   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:12.457772   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:12.457806   45037 cri.go:89] found id: ""
	I0130 20:43:12.457816   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:12.457876   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.462316   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:12.462378   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:12.499660   45037 cri.go:89] found id: ""
	I0130 20:43:12.499690   45037 logs.go:276] 0 containers: []
	W0130 20:43:12.499699   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:12.499707   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:12.499763   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:12.548931   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:12.548961   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:12.548969   45037 cri.go:89] found id: ""
	I0130 20:43:12.548978   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:12.549037   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.552983   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.557322   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:12.557340   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:12.599784   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:12.599812   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:12.716124   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:12.716156   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:12.766940   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:12.766980   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:12.804026   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:12.804059   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:13.165109   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:13.165153   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:13.204652   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:13.204679   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:13.242644   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:13.242675   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:13.282527   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:13.282558   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:13.335128   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:13.335156   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:13.385564   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:13.385599   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:13.449564   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:13.449603   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:13.464376   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:13.464406   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:09.825882   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:11.827628   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.325309   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:12.012894   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.014496   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:12.765167   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.765356   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:16.017083   45037 system_pods.go:59] 8 kube-system pods found
	I0130 20:43:16.017121   45037 system_pods.go:61] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running
	I0130 20:43:16.017128   45037 system_pods.go:61] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running
	I0130 20:43:16.017135   45037 system_pods.go:61] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running
	I0130 20:43:16.017141   45037 system_pods.go:61] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running
	I0130 20:43:16.017148   45037 system_pods.go:61] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running
	I0130 20:43:16.017154   45037 system_pods.go:61] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running
	I0130 20:43:16.017165   45037 system_pods.go:61] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:43:16.017172   45037 system_pods.go:61] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running
	I0130 20:43:16.017185   45037 system_pods.go:74] duration metric: took 3.853859786s to wait for pod list to return data ...
	I0130 20:43:16.017198   45037 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:43:16.019949   45037 default_sa.go:45] found service account: "default"
	I0130 20:43:16.019967   45037 default_sa.go:55] duration metric: took 2.760881ms for default service account to be created ...
	I0130 20:43:16.019976   45037 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:43:16.025198   45037 system_pods.go:86] 8 kube-system pods found
	I0130 20:43:16.025219   45037 system_pods.go:89] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running
	I0130 20:43:16.025225   45037 system_pods.go:89] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running
	I0130 20:43:16.025229   45037 system_pods.go:89] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running
	I0130 20:43:16.025234   45037 system_pods.go:89] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running
	I0130 20:43:16.025238   45037 system_pods.go:89] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running
	I0130 20:43:16.025242   45037 system_pods.go:89] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running
	I0130 20:43:16.025248   45037 system_pods.go:89] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:43:16.025258   45037 system_pods.go:89] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running
	I0130 20:43:16.025264   45037 system_pods.go:126] duration metric: took 5.282813ms to wait for k8s-apps to be running ...
	I0130 20:43:16.025270   45037 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:43:16.025309   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:43:16.043415   45037 system_svc.go:56] duration metric: took 18.134458ms WaitForService to wait for kubelet.
	I0130 20:43:16.043443   45037 kubeadm.go:581] duration metric: took 4m24.119724167s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:43:16.043472   45037 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:43:16.046999   45037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:43:16.047021   45037 node_conditions.go:123] node cpu capacity is 2
	I0130 20:43:16.047035   45037 node_conditions.go:105] duration metric: took 3.556321ms to run NodePressure ...
	I0130 20:43:16.047048   45037 start.go:228] waiting for startup goroutines ...
	I0130 20:43:16.047061   45037 start.go:233] waiting for cluster config update ...
	I0130 20:43:16.047078   45037 start.go:242] writing updated cluster config ...
	I0130 20:43:16.047368   45037 ssh_runner.go:195] Run: rm -f paused
	I0130 20:43:16.098760   45037 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:43:16.100739   45037 out.go:177] * Done! kubectl is now configured to use "embed-certs-208583" cluster and "default" namespace by default
	I0130 20:43:16.326450   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:18.824456   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:16.514335   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:19.014528   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:17.264059   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:19.264543   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:20.824649   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.324731   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:21.014634   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.513609   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:21.763771   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.764216   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:25.325575   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:27.825708   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:25.514335   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:27.506991   45441 pod_ready.go:81] duration metric: took 4m0.000368672s waiting for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" ...
	E0130 20:43:27.507020   45441 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 20:43:27.507037   45441 pod_ready.go:38] duration metric: took 4m11.059827725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:43:27.507060   45441 kubeadm.go:640] restartCluster took 4m33.680532974s
	W0130 20:43:27.507128   45441 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 20:43:27.507159   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 20:43:26.264077   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:28.264502   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:30.764952   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:30.325157   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:32.325570   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:32.766530   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:35.264541   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:34.825545   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:36.825757   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:38.825922   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:37.764613   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:39.772391   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:41.253066   45441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.745883202s)
	I0130 20:43:41.253138   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:43:41.267139   45441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:43:41.276814   45441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:43:41.286633   45441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:43:41.286678   45441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 20:43:41.340190   45441 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 20:43:41.340255   45441 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 20:43:41.491373   45441 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 20:43:41.491524   45441 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 20:43:41.491644   45441 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 20:43:41.735659   45441 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:43:41.737663   45441 out.go:204]   - Generating certificates and keys ...
	I0130 20:43:41.737778   45441 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 20:43:41.737875   45441 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 20:43:41.737961   45441 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 20:43:41.738034   45441 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 20:43:41.738116   45441 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 20:43:41.738215   45441 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 20:43:41.738295   45441 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 20:43:41.738381   45441 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 20:43:41.738481   45441 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 20:43:41.738542   45441 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 20:43:41.738578   45441 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 20:43:41.738633   45441 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:43:41.894828   45441 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:43:42.122408   45441 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:43:42.406185   45441 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:43:42.526794   45441 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:43:42.527473   45441 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:43:42.529906   45441 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:43:40.826403   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:43.324650   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:42.531956   45441 out.go:204]   - Booting up control plane ...
	I0130 20:43:42.532077   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:43:42.532175   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:43:42.532276   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:43:42.550440   45441 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:43:42.551432   45441 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:43:42.551515   45441 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 20:43:42.666449   45441 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 20:43:42.265430   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:44.268768   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:45.325121   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:47.325585   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:46.768728   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:49.264313   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:50.670814   45441 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004172 seconds
	I0130 20:43:50.670940   45441 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 20:43:50.693878   45441 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 20:43:51.228257   45441 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 20:43:51.228498   45441 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-877742 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 20:43:51.743336   45441 kubeadm.go:322] [bootstrap-token] Using token: hhyk9t.fiwckj4dbaljm18s
	I0130 20:43:51.744898   45441 out.go:204]   - Configuring RBAC rules ...
	I0130 20:43:51.744996   45441 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 20:43:51.755911   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 20:43:51.769124   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 20:43:51.773192   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 20:43:51.776643   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 20:43:51.780261   45441 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 20:43:51.807541   45441 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 20:43:52.070376   45441 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 20:43:52.167958   45441 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 20:43:52.167994   45441 kubeadm.go:322] 
	I0130 20:43:52.168050   45441 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 20:43:52.168061   45441 kubeadm.go:322] 
	I0130 20:43:52.168142   45441 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 20:43:52.168157   45441 kubeadm.go:322] 
	I0130 20:43:52.168193   45441 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 20:43:52.168254   45441 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 20:43:52.168325   45441 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 20:43:52.168336   45441 kubeadm.go:322] 
	I0130 20:43:52.168399   45441 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 20:43:52.168409   45441 kubeadm.go:322] 
	I0130 20:43:52.168469   45441 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 20:43:52.168480   45441 kubeadm.go:322] 
	I0130 20:43:52.168546   45441 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 20:43:52.168639   45441 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 20:43:52.168731   45441 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 20:43:52.168741   45441 kubeadm.go:322] 
	I0130 20:43:52.168834   45441 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 20:43:52.168928   45441 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 20:43:52.168938   45441 kubeadm.go:322] 
	I0130 20:43:52.169033   45441 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token hhyk9t.fiwckj4dbaljm18s \
	I0130 20:43:52.169145   45441 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 20:43:52.169175   45441 kubeadm.go:322] 	--control-plane 
	I0130 20:43:52.169185   45441 kubeadm.go:322] 
	I0130 20:43:52.169274   45441 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 20:43:52.169283   45441 kubeadm.go:322] 
	I0130 20:43:52.169374   45441 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token hhyk9t.fiwckj4dbaljm18s \
	I0130 20:43:52.169485   45441 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 20:43:52.170103   45441 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 20:43:52.170128   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:43:52.170138   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:43:52.171736   45441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:43:49.827004   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:51.828301   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:54.324951   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:52.173096   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:43:52.207763   45441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:43:52.239391   45441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:43:52.239528   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:52.239550   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=default-k8s-diff-port-877742 minikube.k8s.io/updated_at=2024_01_30T20_43_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:52.359837   45441 ops.go:34] apiserver oom_adj: -16
	I0130 20:43:52.622616   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:53.123165   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:53.622655   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:54.122819   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:54.623579   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:55.122784   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:51.265017   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:53.765449   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:56.826059   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:59.324992   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:55.622980   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.123436   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.623691   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:57.122685   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:57.623150   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:58.123358   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:58.623234   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:59.122804   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:59.623408   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:00.122730   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.264593   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:58.764827   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:00.765740   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:01.325185   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:03.325582   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:00.622649   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:01.123007   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:01.623488   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:02.123117   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:02.623186   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:03.122987   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:03.623625   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:04.123576   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:04.623493   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:05.123073   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:05.292330   45441 kubeadm.go:1088] duration metric: took 13.052870929s to wait for elevateKubeSystemPrivileges.
	I0130 20:44:05.292359   45441 kubeadm.go:406] StartCluster complete in 5m11.519002976s
	I0130 20:44:05.292376   45441 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:05.292446   45441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:44:05.294511   45441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:05.296490   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:44:05.296705   45441 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:44:05.296739   45441 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:44:05.296797   45441 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.296814   45441 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.296823   45441 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:44:05.296872   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.297028   45441 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.297068   45441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-877742"
	I0130 20:44:05.297257   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297282   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.297449   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297476   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.297476   45441 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.297498   45441 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.297512   45441 addons.go:243] addon metrics-server should already be in state true
	I0130 20:44:05.297557   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.297934   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297972   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.314618   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0130 20:44:05.314913   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I0130 20:44:05.315139   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.315638   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.315718   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.315751   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.316139   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.316295   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.316318   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.316342   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0130 20:44:05.316649   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.316695   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.316729   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.316842   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.317131   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.317573   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.317598   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.317967   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.318507   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.318539   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.321078   45441 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.321104   45441 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:44:05.321129   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.321503   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.321530   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.338144   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0130 20:44:05.338180   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0130 20:44:05.338717   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.338798   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.339318   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.339325   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.339343   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.339345   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.339804   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.339819   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.339987   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.340017   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.340889   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33925
	I0130 20:44:05.341348   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.341847   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.341870   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.342243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.342328   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.344137   45441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:44:05.342641   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.344745   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.345833   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:44:05.345871   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:44:05.345889   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.345936   45441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:44:05.347567   45441 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:05.347585   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:44:05.347602   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.346048   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.348959   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.349635   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.349686   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.349853   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.350119   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.350404   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.350619   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:05.351435   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.351548   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.351565   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.351753   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.351924   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.352094   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.352237   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:05.366786   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I0130 20:44:05.367211   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.367744   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.367768   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.368174   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.368435   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.370411   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.370688   45441 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:05.370707   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:44:05.370726   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.375681   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.375726   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.375758   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.375778   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.375938   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.376136   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.376324   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:03.263112   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:05.264610   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:05.536173   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:44:05.547763   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:44:05.547783   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:44:05.561439   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:05.589801   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:05.619036   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:44:05.619063   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:44:05.672972   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:05.672993   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:44:05.753214   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:05.861799   45441 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-877742" context rescaled to 1 replicas
	I0130 20:44:05.861852   45441 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.52 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:44:05.863602   45441 out.go:177] * Verifying Kubernetes components...
	I0130 20:44:05.864716   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:07.418910   45441 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.882691784s)
	I0130 20:44:07.418945   45441 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0130 20:44:07.960063   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.370223433s)
	I0130 20:44:07.960120   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960161   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.960158   45441 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.095417539s)
	I0130 20:44:07.960143   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.206889959s)
	I0130 20:44:07.960223   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398756648s)
	I0130 20:44:07.960234   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960247   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.960190   45441 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-877742" to be "Ready" ...
	I0130 20:44:07.960251   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961892   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961892   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961902   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961919   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961921   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961902   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961934   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961936   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961941   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961944   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961950   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961955   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961970   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961980   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961990   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.962309   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962340   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962348   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962350   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.962357   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.962380   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962380   45441 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-877742"
	I0130 20:44:07.962420   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962439   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.979672   45441 node_ready.go:49] node "default-k8s-diff-port-877742" has status "Ready":"True"
	I0130 20:44:07.979700   45441 node_ready.go:38] duration metric: took 19.437813ms waiting for node "default-k8s-diff-port-877742" to be "Ready" ...
	I0130 20:44:07.979713   45441 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:08.005989   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:08.006020   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:08.006266   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:08.006287   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:08.006286   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:08.008091   45441 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0130 20:44:05.329467   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:07.826212   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:08.009918   45441 addons.go:505] enable addons completed in 2.713172208s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0130 20:44:08.032478   45441 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.539497   45441 pod_ready.go:92] pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.539527   45441 pod_ready.go:81] duration metric: took 1.50701275s waiting for pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.539537   45441 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.545068   45441 pod_ready.go:92] pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.545090   45441 pod_ready.go:81] duration metric: took 5.546681ms waiting for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.545099   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.550794   45441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.550817   45441 pod_ready.go:81] duration metric: took 5.711144ms waiting for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.550829   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.556050   45441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.556068   45441 pod_ready.go:81] duration metric: took 5.232882ms waiting for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.556076   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-59zvd" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.562849   45441 pod_ready.go:92] pod "kube-proxy-59zvd" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.562866   45441 pod_ready.go:81] duration metric: took 6.784197ms waiting for pod "kube-proxy-59zvd" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.562874   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.965815   45441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.965846   45441 pod_ready.go:81] duration metric: took 402.96387ms waiting for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.965860   45441 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:07.265985   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:09.765494   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:10.326063   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:12.825921   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:11.974724   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:14.473879   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:12.265674   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:14.765546   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:15.325945   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:17.326041   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:16.974143   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:19.473552   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:16.765691   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:18.766995   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:19.824366   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:21.824919   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:24.318779   45819 pod_ready.go:81] duration metric: took 4m0.000598437s waiting for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" ...
	E0130 20:44:24.318808   45819 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 20:44:24.318829   45819 pod_ready.go:38] duration metric: took 4m1.194970045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:24.318872   45819 kubeadm.go:640] restartCluster took 5m9.285235807s
	W0130 20:44:24.318943   45819 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 20:44:24.318974   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 20:44:21.973193   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.974160   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:21.263429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.263586   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.263609   44923 pod_ready.go:81] duration metric: took 4m0.006890289s waiting for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	E0130 20:44:23.263618   44923 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:44:23.263625   44923 pod_ready.go:38] duration metric: took 4m4.564565945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:23.263637   44923 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:44:23.263671   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:23.263711   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:23.319983   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:23.320013   44923 cri.go:89] found id: ""
	I0130 20:44:23.320023   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:23.320078   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.325174   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:23.325239   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:23.375914   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:23.375952   44923 cri.go:89] found id: ""
	I0130 20:44:23.375960   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:23.376003   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.380265   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:23.380324   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:23.428507   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:23.428534   44923 cri.go:89] found id: ""
	I0130 20:44:23.428544   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:23.428591   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.434113   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:23.434184   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:23.522888   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:23.522915   44923 cri.go:89] found id: ""
	I0130 20:44:23.522922   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:23.522964   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.534952   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:23.535015   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:23.576102   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:23.576129   44923 cri.go:89] found id: ""
	I0130 20:44:23.576138   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:23.576185   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.580463   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:23.580527   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:23.620990   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:23.621011   44923 cri.go:89] found id: ""
	I0130 20:44:23.621018   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:23.621069   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.625706   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:23.625762   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:23.666341   44923 cri.go:89] found id: ""
	I0130 20:44:23.666368   44923 logs.go:276] 0 containers: []
	W0130 20:44:23.666378   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:23.666384   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:23.666441   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:23.707229   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:23.707248   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:23.707252   44923 cri.go:89] found id: ""
	I0130 20:44:23.707258   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:23.707314   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.711242   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.715859   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:23.715883   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:23.775696   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:23.775722   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:23.817767   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:23.817796   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:24.301934   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:24.301969   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:24.361236   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:24.361265   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:24.511849   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:24.511886   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:24.573648   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:24.573683   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:24.620572   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:24.620608   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:24.687312   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:24.687346   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:24.702224   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:24.702262   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:24.749188   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:24.749218   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:24.793069   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:24.793093   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:24.829705   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:24.829730   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:29.263901   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.944900372s)
	I0130 20:44:29.263978   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:29.277198   45819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:44:29.286661   45819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:44:29.297088   45819 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:44:29.297129   45819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0130 20:44:29.360347   45819 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0130 20:44:29.360446   45819 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 20:44:29.516880   45819 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 20:44:29.517075   45819 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 20:44:29.517217   45819 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 20:44:29.756175   45819 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:44:29.756323   45819 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:44:29.764820   45819 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0130 20:44:29.907654   45819 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:44:26.473595   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:28.473808   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:29.909307   45819 out.go:204]   - Generating certificates and keys ...
	I0130 20:44:29.909397   45819 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 20:44:29.909484   45819 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 20:44:29.909578   45819 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 20:44:29.909674   45819 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 20:44:29.909784   45819 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 20:44:29.909866   45819 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 20:44:29.909974   45819 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 20:44:29.910057   45819 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 20:44:29.910163   45819 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 20:44:29.910266   45819 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 20:44:29.910316   45819 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 20:44:29.910409   45819 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:44:29.974805   45819 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:44:30.281258   45819 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:44:30.605015   45819 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:44:30.782125   45819 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:44:30.783329   45819 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:44:27.369691   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:44:27.393279   44923 api_server.go:72] duration metric: took 4m16.430750077s to wait for apiserver process to appear ...
	I0130 20:44:27.393306   44923 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:44:27.393355   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:27.393434   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:27.443366   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:27.443390   44923 cri.go:89] found id: ""
	I0130 20:44:27.443400   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:27.443457   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.448963   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:27.449021   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:27.502318   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:27.502341   44923 cri.go:89] found id: ""
	I0130 20:44:27.502348   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:27.502398   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.507295   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:27.507352   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:27.548224   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:27.548247   44923 cri.go:89] found id: ""
	I0130 20:44:27.548255   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:27.548299   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.552806   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:27.552864   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:27.608403   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:27.608434   44923 cri.go:89] found id: ""
	I0130 20:44:27.608444   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:27.608523   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.613370   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:27.613435   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:27.668380   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:27.668406   44923 cri.go:89] found id: ""
	I0130 20:44:27.668417   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:27.668470   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.673171   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:27.673231   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:27.720444   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:27.720473   44923 cri.go:89] found id: ""
	I0130 20:44:27.720483   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:27.720546   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.725007   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:27.725062   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:27.772186   44923 cri.go:89] found id: ""
	I0130 20:44:27.772214   44923 logs.go:276] 0 containers: []
	W0130 20:44:27.772224   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:27.772231   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:27.772288   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:27.813222   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:27.813259   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:27.813268   44923 cri.go:89] found id: ""
	I0130 20:44:27.813286   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:27.813347   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.817565   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.821737   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:27.821759   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:28.299900   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:28.299933   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:28.441830   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:28.441866   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:28.485579   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:28.485611   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:28.500668   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:28.500691   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:28.558472   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:28.558502   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:28.604655   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:28.604687   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:28.670010   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:28.670041   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:28.712222   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:28.712259   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:28.764243   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:28.764276   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:28.801930   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:28.801956   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:28.848585   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:28.848612   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:28.902903   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:28.902936   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:30.785050   45819 out.go:204]   - Booting up control plane ...
	I0130 20:44:30.785155   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:44:30.790853   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:44:30.798657   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:44:30.799425   45819 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:44:30.801711   45819 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 20:44:30.475584   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:32.973843   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:34.974144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:31.454103   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:44:31.460009   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0130 20:44:31.461505   44923 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 20:44:31.461527   44923 api_server.go:131] duration metric: took 4.068214052s to wait for apiserver health ...
	I0130 20:44:31.461537   44923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:44:31.461563   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:31.461626   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:31.509850   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:31.509874   44923 cri.go:89] found id: ""
	I0130 20:44:31.509884   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:31.509941   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.514078   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:31.514136   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:31.555581   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:31.555605   44923 cri.go:89] found id: ""
	I0130 20:44:31.555613   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:31.555674   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.559888   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:31.559948   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:31.620256   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:31.620285   44923 cri.go:89] found id: ""
	I0130 20:44:31.620295   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:31.620352   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.626003   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:31.626064   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:31.662862   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:31.662889   44923 cri.go:89] found id: ""
	I0130 20:44:31.662899   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:31.662972   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.668242   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:31.668306   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:31.717065   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:31.717089   44923 cri.go:89] found id: ""
	I0130 20:44:31.717098   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:31.717160   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.722195   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:31.722250   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:31.779789   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:31.779812   44923 cri.go:89] found id: ""
	I0130 20:44:31.779821   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:31.779894   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.784710   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:31.784776   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:31.826045   44923 cri.go:89] found id: ""
	I0130 20:44:31.826073   44923 logs.go:276] 0 containers: []
	W0130 20:44:31.826082   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:31.826087   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:31.826131   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:31.868212   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:31.868236   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:31.868243   44923 cri.go:89] found id: ""
	I0130 20:44:31.868253   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:31.868314   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.873019   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.877432   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:31.877456   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:31.915888   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:31.915915   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:31.972950   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:31.972978   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:32.028993   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:32.029028   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:32.046602   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:32.046633   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:32.094088   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:32.094123   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:32.138616   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:32.138645   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:32.526995   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:32.527033   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:32.591970   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:32.592003   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:32.655438   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:32.655466   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:32.707131   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:32.707163   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:32.749581   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:32.749610   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:32.815778   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:32.815805   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:35.448121   44923 system_pods.go:59] 8 kube-system pods found
	I0130 20:44:35.448155   44923 system_pods.go:61] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running
	I0130 20:44:35.448162   44923 system_pods.go:61] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running
	I0130 20:44:35.448169   44923 system_pods.go:61] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running
	I0130 20:44:35.448175   44923 system_pods.go:61] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running
	I0130 20:44:35.448181   44923 system_pods.go:61] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running
	I0130 20:44:35.448188   44923 system_pods.go:61] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running
	I0130 20:44:35.448198   44923 system_pods.go:61] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:44:35.448210   44923 system_pods.go:61] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running
	I0130 20:44:35.448221   44923 system_pods.go:74] duration metric: took 3.986678023s to wait for pod list to return data ...
	I0130 20:44:35.448227   44923 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:44:35.451377   44923 default_sa.go:45] found service account: "default"
	I0130 20:44:35.451397   44923 default_sa.go:55] duration metric: took 3.162882ms for default service account to be created ...
	I0130 20:44:35.451404   44923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:44:35.457941   44923 system_pods.go:86] 8 kube-system pods found
	I0130 20:44:35.457962   44923 system_pods.go:89] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running
	I0130 20:44:35.457969   44923 system_pods.go:89] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running
	I0130 20:44:35.457976   44923 system_pods.go:89] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running
	I0130 20:44:35.457983   44923 system_pods.go:89] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running
	I0130 20:44:35.457992   44923 system_pods.go:89] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running
	I0130 20:44:35.457999   44923 system_pods.go:89] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running
	I0130 20:44:35.458013   44923 system_pods.go:89] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:44:35.458023   44923 system_pods.go:89] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running
	I0130 20:44:35.458032   44923 system_pods.go:126] duration metric: took 6.622973ms to wait for k8s-apps to be running ...
	I0130 20:44:35.458040   44923 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:44:35.458085   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:35.478158   44923 system_svc.go:56] duration metric: took 20.107762ms WaitForService to wait for kubelet.
	I0130 20:44:35.478182   44923 kubeadm.go:581] duration metric: took 4m24.515659177s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:44:35.478205   44923 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:44:35.481624   44923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:44:35.481649   44923 node_conditions.go:123] node cpu capacity is 2
	I0130 20:44:35.481661   44923 node_conditions.go:105] duration metric: took 3.450762ms to run NodePressure ...
	I0130 20:44:35.481674   44923 start.go:228] waiting for startup goroutines ...
	I0130 20:44:35.481682   44923 start.go:233] waiting for cluster config update ...
	I0130 20:44:35.481695   44923 start.go:242] writing updated cluster config ...
	I0130 20:44:35.481966   44923 ssh_runner.go:195] Run: rm -f paused
	I0130 20:44:35.534192   44923 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0130 20:44:35.537286   44923 out.go:177] * Done! kubectl is now configured to use "no-preload-473743" cluster and "default" namespace by default
	I0130 20:44:36.975176   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:39.472594   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:40.808532   45819 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005048 seconds
	I0130 20:44:40.808703   45819 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 20:44:40.821445   45819 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 20:44:41.350196   45819 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 20:44:41.350372   45819 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-150971 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0130 20:44:41.859169   45819 kubeadm.go:322] [bootstrap-token] Using token: vlkrdr.8ubylscclgt88ll2
	I0130 20:44:41.862311   45819 out.go:204]   - Configuring RBAC rules ...
	I0130 20:44:41.862450   45819 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 20:44:41.870072   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 20:44:41.874429   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 20:44:41.883936   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 20:44:41.887738   45819 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 20:44:41.963361   45819 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 20:44:42.299030   45819 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 20:44:42.300623   45819 kubeadm.go:322] 
	I0130 20:44:42.300708   45819 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 20:44:42.300721   45819 kubeadm.go:322] 
	I0130 20:44:42.300820   45819 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 20:44:42.300845   45819 kubeadm.go:322] 
	I0130 20:44:42.300886   45819 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 20:44:42.300975   45819 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 20:44:42.301048   45819 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 20:44:42.301061   45819 kubeadm.go:322] 
	I0130 20:44:42.301126   45819 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 20:44:42.301241   45819 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 20:44:42.301309   45819 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 20:44:42.301326   45819 kubeadm.go:322] 
	I0130 20:44:42.301417   45819 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0130 20:44:42.301482   45819 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 20:44:42.301488   45819 kubeadm.go:322] 
	I0130 20:44:42.301554   45819 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vlkrdr.8ubylscclgt88ll2 \
	I0130 20:44:42.301684   45819 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 20:44:42.301717   45819 kubeadm.go:322]     --control-plane 	  
	I0130 20:44:42.301726   45819 kubeadm.go:322] 
	I0130 20:44:42.301827   45819 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 20:44:42.301844   45819 kubeadm.go:322] 
	I0130 20:44:42.301984   45819 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vlkrdr.8ubylscclgt88ll2 \
	I0130 20:44:42.302116   45819 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 20:44:42.302689   45819 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 20:44:42.302726   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:44:42.302739   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:44:42.305197   45819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:44:42.306389   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:44:42.357619   45819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:44:42.381081   45819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:44:42.381189   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:42.381196   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=old-k8s-version-150971 minikube.k8s.io/updated_at=2024_01_30T20_44_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:42.406368   45819 ops.go:34] apiserver oom_adj: -16
	I0130 20:44:42.639356   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:43.139439   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:43.640260   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:44.140080   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:44.639587   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:41.473598   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:43.474059   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:45.140354   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:45.640062   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:46.140282   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:46.639400   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:47.140308   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:47.640045   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:48.139406   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:48.640423   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:49.139702   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:49.640036   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:45.973530   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:47.974364   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:49.974551   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:50.139435   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:50.639471   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:51.140088   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:51.639444   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.139401   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.639731   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:53.140050   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:53.639411   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:54.139942   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:54.640279   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.473624   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:54.474924   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:55.139610   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:55.639431   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:56.140267   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:56.639444   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:57.140068   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:57.296527   45819 kubeadm.go:1088] duration metric: took 14.915402679s to wait for elevateKubeSystemPrivileges.
	I0130 20:44:57.296567   45819 kubeadm.go:406] StartCluster complete in 5m42.316503122s
	I0130 20:44:57.296588   45819 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:57.296672   45819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:44:57.298762   45819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:57.299005   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:44:57.299123   45819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:44:57.299208   45819 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299220   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:44:57.299229   45819 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-150971"
	W0130 20:44:57.299241   45819 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:44:57.299220   45819 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299300   45819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-150971"
	I0130 20:44:57.299315   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.299247   45819 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299387   45819 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-150971"
	W0130 20:44:57.299397   45819 addons.go:243] addon metrics-server should already be in state true
	I0130 20:44:57.299433   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.299705   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299726   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.299756   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299760   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299796   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.299897   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.319159   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0130 20:44:57.319202   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0130 20:44:57.319167   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34823
	I0130 20:44:57.319578   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.319707   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.319771   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.320071   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320103   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320242   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320261   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320408   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320423   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320586   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.320630   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.321140   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.321158   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.321591   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.321624   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.321675   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.321705   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.325091   45819 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-150971"
	W0130 20:44:57.325106   45819 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:44:57.325125   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.325420   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.325442   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.342652   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0130 20:44:57.342787   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41961
	I0130 20:44:57.343203   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.343303   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.343745   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.343779   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.343848   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I0130 20:44:57.343887   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.343903   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.344220   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.344244   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.344220   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.344493   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.344494   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.344707   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.344730   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.345083   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.346139   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.346172   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.346830   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.346891   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.348974   45819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:44:57.350330   45819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:44:57.350364   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:44:57.351707   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:44:57.351729   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.351684   45819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:57.351795   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:44:57.351821   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.356145   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356428   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356595   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.356621   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356767   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.357040   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.357095   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.357123   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.357218   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.357266   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.357458   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.357451   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.357617   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.357754   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.362806   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0130 20:44:57.363167   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.363742   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.363770   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.364074   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.364280   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.365877   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.366086   45819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:57.366096   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:44:57.366107   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.369237   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.369890   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.369930   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.369968   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.370351   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.370563   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.370712   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.509329   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:57.535146   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:57.536528   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:44:57.559042   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:44:57.559066   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:44:57.643054   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:44:57.643081   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:44:57.773561   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:57.773588   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:44:57.848668   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:57.910205   45819 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-150971" context rescaled to 1 replicas
	I0130 20:44:57.910247   45819 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:44:57.912390   45819 out.go:177] * Verifying Kubernetes components...
	I0130 20:44:57.913764   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:58.721986   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.186811658s)
	I0130 20:44:58.722033   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722045   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722145   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.185575635s)
	I0130 20:44:58.722210   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212845439s)
	I0130 20:44:58.722213   45819 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0130 20:44:58.722254   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722271   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722347   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.722359   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722371   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.722381   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722391   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722537   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.722576   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722593   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.722611   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722621   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722659   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722675   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.724251   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.724291   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.724304   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.798383   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.798410   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.798745   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.798767   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.798816   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125243   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.276531373s)
	I0130 20:44:59.125305   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:59.125322   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:59.125256   45819 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.211465342s)
	I0130 20:44:59.125360   45819 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-150971" to be "Ready" ...
	I0130 20:44:59.125612   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:59.125639   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125650   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:59.125650   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:59.125659   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:59.125902   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:59.125953   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:59.125963   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125972   45819 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-150971"
	I0130 20:44:59.127634   45819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:44:59.129415   45819 addons.go:505] enable addons completed in 1.830294624s: enabled=[storage-provisioner default-storageclass metrics-server]
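	With the three addons reported enabled above, their state can be confirmed against the same profile outside the test harness. A minimal check, assuming the kubectl context name matches the profile name and the standard k8s-app=metrics-server label from the addon manifest:

	    $ minikube -p old-k8s-version-150971 addons list | grep -E 'storage-provisioner|default-storageclass|metrics-server'
	    $ kubectl --context old-k8s-version-150971 -n kube-system get pods -l k8s-app=metrics-server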
	I0130 20:44:59.141691   45819 node_ready.go:49] node "old-k8s-version-150971" has status "Ready":"True"
	I0130 20:44:59.141715   45819 node_ready.go:38] duration metric: took 16.331635ms waiting for node "old-k8s-version-150971" to be "Ready" ...
	I0130 20:44:59.141725   45819 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:59.146645   45819 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:56.475086   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:58.973370   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:00.161718   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace has status "Ready":"True"
	I0130 20:45:00.161741   45819 pod_ready.go:81] duration metric: took 1.015069343s waiting for pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.161754   45819 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zbdxm" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.668280   45819 pod_ready.go:92] pod "kube-proxy-zbdxm" in "kube-system" namespace has status "Ready":"True"
	I0130 20:45:00.668313   45819 pod_ready.go:81] duration metric: took 506.550797ms waiting for pod "kube-proxy-zbdxm" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.668328   45819 pod_ready.go:38] duration metric: took 1.526591158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:45:00.668343   45819 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:45:00.668398   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:45:00.682119   45819 api_server.go:72] duration metric: took 2.771845703s to wait for apiserver process to appear ...
	I0130 20:45:00.682143   45819 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:45:00.682167   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:45:00.687603   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0130 20:45:00.688287   45819 api_server.go:141] control plane version: v1.16.0
	I0130 20:45:00.688302   45819 api_server.go:131] duration metric: took 6.153997ms to wait for apiserver health ...
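	The healthz probe logged above is a plain HTTPS GET against the apiserver. The same check can be reproduced by hand; a minimal version, assuming the context name matches the profile:

	    $ kubectl --context old-k8s-version-150971 get --raw /healthz
	    ok
	    $ kubectl --context old-k8s-version-150971 get --raw /version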
	I0130 20:45:00.688309   45819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:45:00.691917   45819 system_pods.go:59] 4 kube-system pods found
	I0130 20:45:00.691936   45819 system_pods.go:61] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.691942   45819 system_pods.go:61] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.691948   45819 system_pods.go:61] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.691954   45819 system_pods.go:61] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.691962   45819 system_pods.go:74] duration metric: took 3.648521ms to wait for pod list to return data ...
	I0130 20:45:00.691970   45819 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:45:00.694229   45819 default_sa.go:45] found service account: "default"
	I0130 20:45:00.694250   45819 default_sa.go:55] duration metric: took 2.274248ms for default service account to be created ...
	I0130 20:45:00.694258   45819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:45:00.698156   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:00.698179   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.698187   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.698198   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.698210   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.698234   45819 retry.go:31] will retry after 277.03208ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:00.979637   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:00.979660   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.979665   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.979671   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.979677   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.979694   45819 retry.go:31] will retry after 341.469517ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.326631   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:01.326666   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:01.326674   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:01.326683   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:01.326689   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:01.326713   45819 retry.go:31] will retry after 487.104661ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.818702   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:01.818733   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:01.818742   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:01.818752   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:01.818759   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:01.818779   45819 retry.go:31] will retry after 574.423042ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:02.398901   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:02.398940   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:02.398949   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:02.398959   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:02.398966   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:02.398986   45819 retry.go:31] will retry after 741.538469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:03.145137   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:03.145162   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:03.145168   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:03.145174   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:03.145179   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:03.145194   45819 retry.go:31] will retry after 742.915086ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:03.892722   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:03.892748   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:03.892753   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:03.892759   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:03.892764   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:03.892779   45819 retry.go:31] will retry after 786.727719ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.473056   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:03.473346   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:04.685933   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:04.685967   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:04.685976   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:04.685985   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:04.685993   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:04.686016   45819 retry.go:31] will retry after 1.232157955s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:05.923020   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:05.923045   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:05.923050   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:05.923056   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:05.923061   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:05.923076   45819 retry.go:31] will retry after 1.652424416s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:07.580982   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:07.581007   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:07.581013   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:07.581019   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:07.581026   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:07.581042   45819 retry.go:31] will retry after 1.774276151s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:09.360073   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:09.360098   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:09.360103   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:09.360110   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:09.360115   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:09.360133   45819 retry.go:31] will retry after 2.786181653s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:05.975152   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:07.975274   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:12.151191   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:12.151215   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:12.151221   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:12.151227   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:12.151232   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:12.151258   45819 retry.go:31] will retry after 3.456504284s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:10.472793   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:12.474310   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:14.977715   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:15.613679   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:15.613705   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:15.613711   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:15.613718   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:15.613722   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:15.613741   45819 retry.go:31] will retry after 4.434906632s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:17.472993   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:19.473530   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:20.053023   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:20.053050   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:20.053055   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:20.053062   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:20.053066   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:20.053082   45819 retry.go:31] will retry after 3.910644554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:23.969998   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:23.970027   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:23.970035   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:23.970047   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:23.970053   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:23.970075   45819 retry.go:31] will retry after 4.907431581s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:21.473946   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:23.973965   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:28.881886   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:28.881911   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:28.881917   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:28.881924   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:28.881929   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:28.881956   45819 retry.go:31] will retry after 7.594967181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:26.473519   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:28.474676   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:30.972445   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:32.973156   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:34.973590   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:36.482226   45819 system_pods.go:86] 5 kube-system pods found
	I0130 20:45:36.482255   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:36.482261   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:36.482267   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Pending
	I0130 20:45:36.482277   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:36.482284   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:36.482306   45819 retry.go:31] will retry after 8.875079493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:36.974189   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:39.474803   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:41.973709   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:43.974130   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:45.361733   45819 system_pods.go:86] 5 kube-system pods found
	I0130 20:45:45.361760   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:45.361766   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:45.361772   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:45:45.361781   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:45.361789   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:45.361820   45819 retry.go:31] will retry after 9.918306048s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0130 20:45:45.976853   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:48.476619   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:50.974748   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:52.975900   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:55.285765   45819 system_pods.go:86] 6 kube-system pods found
	I0130 20:45:55.285793   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:55.285801   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Pending
	I0130 20:45:55.285807   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:55.285813   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:45:55.285822   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:55.285828   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:55.285849   45819 retry.go:31] will retry after 12.684125727s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0130 20:45:55.473705   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:57.973533   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:59.974108   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:02.473825   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:04.973953   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:07.975898   45819 system_pods.go:86] 8 kube-system pods found
	I0130 20:46:07.975923   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:46:07.975929   45819 system_pods.go:89] "etcd-old-k8s-version-150971" [21884345-e587-4bae-88b9-78e0bdacf954] Running
	I0130 20:46:07.975933   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Running
	I0130 20:46:07.975937   45819 system_pods.go:89] "kube-controller-manager-old-k8s-version-150971" [f0cfbd77-f00e-4d40-a301-f24f6ed937e1] Pending
	I0130 20:46:07.975941   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:46:07.975944   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:46:07.975951   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:46:07.975955   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:46:07.975969   45819 retry.go:31] will retry after 15.59894457s: missing components: kube-controller-manager
	I0130 20:46:07.472712   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:09.474175   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:11.478228   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:13.973190   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:16.473264   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:18.474418   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:23.581862   45819 system_pods.go:86] 8 kube-system pods found
	I0130 20:46:23.581890   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:46:23.581895   45819 system_pods.go:89] "etcd-old-k8s-version-150971" [21884345-e587-4bae-88b9-78e0bdacf954] Running
	I0130 20:46:23.581899   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Running
	I0130 20:46:23.581904   45819 system_pods.go:89] "kube-controller-manager-old-k8s-version-150971" [f0cfbd77-f00e-4d40-a301-f24f6ed937e1] Running
	I0130 20:46:23.581907   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:46:23.581911   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:46:23.581918   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:46:23.581923   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:46:23.581932   45819 system_pods.go:126] duration metric: took 1m22.887668504s to wait for k8s-apps to be running ...
	I0130 20:46:23.581939   45819 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:46:23.581986   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:46:23.604099   45819 system_svc.go:56] duration metric: took 22.14886ms WaitForService to wait for kubelet.
	I0130 20:46:23.604134   45819 kubeadm.go:581] duration metric: took 1m25.693865663s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:46:23.604159   45819 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:46:23.607539   45819 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:46:23.607567   45819 node_conditions.go:123] node cpu capacity is 2
	I0130 20:46:23.607580   45819 node_conditions.go:105] duration metric: took 3.415829ms to run NodePressure ...
	I0130 20:46:23.607594   45819 start.go:228] waiting for startup goroutines ...
	I0130 20:46:23.607602   45819 start.go:233] waiting for cluster config update ...
	I0130 20:46:23.607615   45819 start.go:242] writing updated cluster config ...
	I0130 20:46:23.607933   45819 ssh_runner.go:195] Run: rm -f paused
	I0130 20:46:23.658357   45819 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0130 20:46:23.660375   45819 out.go:177] 
	W0130 20:46:23.661789   45819 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0130 20:46:23.663112   45819 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0130 20:46:23.664623   45819 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-150971" cluster and "default" namespace by default
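	For reference, the version-skew warning above (host kubectl 1.29.1 vs. cluster 1.16.0) can be sidestepped by letting minikube fetch a version-matched kubectl, exactly as the message suggests; a minimal sketch, assuming only the profile name shown in this log:

	  minikube -p old-k8s-version-150971 kubectl -- get pods -A

	This downloads and invokes a kubectl matching the cluster's v1.16.0 control plane instead of the host's /usr/local/bin/kubectl.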
	I0130 20:46:20.474791   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:22.973143   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:24.974320   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:27.474508   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:29.973471   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:31.973727   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:33.974180   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:36.472928   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:38.474336   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:40.973509   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:42.973942   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:45.473120   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:47.972943   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:49.973756   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:51.973913   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:54.472597   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:56.473076   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:58.974262   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:01.476906   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:03.974275   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:06.474453   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:08.973144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:10.973407   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:12.974842   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:15.473765   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:17.474938   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:19.973849   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:21.974660   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:23.977144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:26.479595   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:28.975572   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:31.473715   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:33.974243   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:36.472321   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:38.473133   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:40.973786   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:43.473691   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:45.476882   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:47.975923   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:50.474045   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:52.474411   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:54.474531   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:56.973542   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:58.974226   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:00.975045   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:03.473440   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:05.473667   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:07.973417   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:09.978199   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:09.978230   45441 pod_ready.go:81] duration metric: took 4m0.012361166s waiting for pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace to be "Ready" ...
	E0130 20:48:09.978243   45441 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:48:09.978253   45441 pod_ready.go:38] duration metric: took 4m1.998529694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:48:09.978276   45441 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:48:09.978323   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:09.978403   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:10.038921   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:10.038949   45441 cri.go:89] found id: ""
	I0130 20:48:10.038958   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:10.039017   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.043851   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:10.043902   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:10.088920   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:10.088945   45441 cri.go:89] found id: ""
	I0130 20:48:10.088952   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:10.089001   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.094186   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:10.094267   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:10.143350   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:10.143380   45441 cri.go:89] found id: ""
	I0130 20:48:10.143390   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:10.143450   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.148357   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:10.148426   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:10.187812   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:10.187848   45441 cri.go:89] found id: ""
	I0130 20:48:10.187858   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:10.187914   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.192049   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:10.192109   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:10.241052   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:10.241079   45441 cri.go:89] found id: ""
	I0130 20:48:10.241088   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:10.241139   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.245711   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:10.245763   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:10.287115   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:10.287139   45441 cri.go:89] found id: ""
	I0130 20:48:10.287148   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:10.287194   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.291627   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:10.291697   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:10.341321   45441 cri.go:89] found id: ""
	I0130 20:48:10.341346   45441 logs.go:276] 0 containers: []
	W0130 20:48:10.341356   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:10.341362   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:10.341420   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:10.385515   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:10.385543   45441 cri.go:89] found id: ""
	I0130 20:48:10.385552   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:10.385601   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.390397   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:10.390433   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:10.832689   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:10.832724   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:10.846560   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:10.846587   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:10.887801   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:10.887826   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:10.942977   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:10.943003   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:10.987642   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:10.987669   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:11.024934   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:11.024964   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:11.076336   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:11.076373   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:11.127315   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:11.127344   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:11.182944   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:11.182974   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:11.276494   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:11.276525   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:11.413186   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:11.413213   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:13.960537   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:48:13.977332   45441 api_server.go:72] duration metric: took 4m8.11544723s to wait for apiserver process to appear ...
	I0130 20:48:13.977362   45441 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:48:13.977400   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:13.977466   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:14.025510   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:14.025534   45441 cri.go:89] found id: ""
	I0130 20:48:14.025542   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:14.025593   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.030025   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:14.030103   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:14.070504   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:14.070524   45441 cri.go:89] found id: ""
	I0130 20:48:14.070531   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:14.070577   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.074858   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:14.074928   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:14.110816   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:14.110844   45441 cri.go:89] found id: ""
	I0130 20:48:14.110853   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:14.110912   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.114997   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:14.115079   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:14.169213   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:14.169240   45441 cri.go:89] found id: ""
	I0130 20:48:14.169249   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:14.169305   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.173541   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:14.173607   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:14.210634   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:14.210657   45441 cri.go:89] found id: ""
	I0130 20:48:14.210664   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:14.210717   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.215015   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:14.215074   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:14.258454   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:14.258477   45441 cri.go:89] found id: ""
	I0130 20:48:14.258484   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:14.258532   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.262486   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:14.262537   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:14.302175   45441 cri.go:89] found id: ""
	I0130 20:48:14.302205   45441 logs.go:276] 0 containers: []
	W0130 20:48:14.302213   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:14.302218   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:14.302262   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:14.339497   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:14.339523   45441 cri.go:89] found id: ""
	I0130 20:48:14.339533   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:14.339589   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.343954   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:14.343983   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:14.391168   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:14.391203   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:14.436713   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:14.436743   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:14.473899   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:14.473934   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:14.533733   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:14.533763   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:14.924087   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:14.924121   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:14.972652   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:14.972684   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:15.074398   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:15.074443   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:15.206993   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:15.207026   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:15.258807   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:15.258841   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:15.299162   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:15.299209   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:15.315611   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:15.315643   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:17.859914   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:48:17.865483   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 200:
	ok
	I0130 20:48:17.866876   45441 api_server.go:141] control plane version: v1.28.4
	I0130 20:48:17.866899   45441 api_server.go:131] duration metric: took 3.889528289s to wait for apiserver health ...
	I0130 20:48:17.866910   45441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:48:17.866937   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:17.866992   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:17.907357   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:17.907386   45441 cri.go:89] found id: ""
	I0130 20:48:17.907396   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:17.907461   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.911558   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:17.911617   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:17.948725   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:17.948747   45441 cri.go:89] found id: ""
	I0130 20:48:17.948757   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:17.948819   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.953304   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:17.953365   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:17.994059   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:17.994091   45441 cri.go:89] found id: ""
	I0130 20:48:17.994101   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:17.994158   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.998347   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:17.998402   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:18.047814   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:18.047842   45441 cri.go:89] found id: ""
	I0130 20:48:18.047853   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:18.047914   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.052864   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:18.052927   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:18.091597   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:18.091617   45441 cri.go:89] found id: ""
	I0130 20:48:18.091625   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:18.091680   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.095921   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:18.096034   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:18.146922   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:18.146942   45441 cri.go:89] found id: ""
	I0130 20:48:18.146952   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:18.147002   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.156610   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:18.156671   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:18.209680   45441 cri.go:89] found id: ""
	I0130 20:48:18.209701   45441 logs.go:276] 0 containers: []
	W0130 20:48:18.209711   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:18.209716   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:18.209761   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:18.253810   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:18.253834   45441 cri.go:89] found id: ""
	I0130 20:48:18.253841   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:18.253883   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.258404   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:18.258433   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:18.305088   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:18.305117   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:18.629911   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:18.629948   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:18.677758   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:18.677787   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:18.779831   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:18.779869   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:18.795995   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:18.796024   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:18.844003   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:18.844034   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:18.884617   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:18.884645   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:18.931556   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:18.931591   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:19.066569   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:19.066606   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:19.115012   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:19.115041   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:19.169107   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:19.169137   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:21.731792   45441 system_pods.go:59] 8 kube-system pods found
	I0130 20:48:21.731816   45441 system_pods.go:61] "coredns-5dd5756b68-tlb8h" [547c1fe4-3ef7-421a-b460-660a05caa2ab] Running
	I0130 20:48:21.731821   45441 system_pods.go:61] "etcd-default-k8s-diff-port-877742" [a8ff44ad-5fec-415b-a574-75bce55acf8e] Running
	I0130 20:48:21.731826   45441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-877742" [b183118a-5376-412c-a991-eaebf0e6a46e] Running
	I0130 20:48:21.731830   45441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-877742" [cd5170b0-7d1c-45fd-9670-376d04e7016b] Running
	I0130 20:48:21.731834   45441 system_pods.go:61] "kube-proxy-59zvd" [ca6ef754-0898-4e1d-9ff2-9f42f456db6c] Running
	I0130 20:48:21.731838   45441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-877742" [5870d68e-b7af-408b-9484-a7e414bbe7f7] Running
	I0130 20:48:21.731845   45441 system_pods.go:61] "metrics-server-57f55c9bc5-xjc2m" [7b9a273b-d328-4ae8-925e-5bb305cfe574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:48:21.731853   45441 system_pods.go:61] "storage-provisioner" [db1a28e4-0c45-496e-a566-32a402b0841d] Running
	I0130 20:48:21.731862   45441 system_pods.go:74] duration metric: took 3.864945598s to wait for pod list to return data ...
	I0130 20:48:21.731878   45441 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:48:21.734586   45441 default_sa.go:45] found service account: "default"
	I0130 20:48:21.734604   45441 default_sa.go:55] duration metric: took 2.721611ms for default service account to be created ...
	I0130 20:48:21.734611   45441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:48:21.740794   45441 system_pods.go:86] 8 kube-system pods found
	I0130 20:48:21.740817   45441 system_pods.go:89] "coredns-5dd5756b68-tlb8h" [547c1fe4-3ef7-421a-b460-660a05caa2ab] Running
	I0130 20:48:21.740822   45441 system_pods.go:89] "etcd-default-k8s-diff-port-877742" [a8ff44ad-5fec-415b-a574-75bce55acf8e] Running
	I0130 20:48:21.740827   45441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-877742" [b183118a-5376-412c-a991-eaebf0e6a46e] Running
	I0130 20:48:21.740831   45441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-877742" [cd5170b0-7d1c-45fd-9670-376d04e7016b] Running
	I0130 20:48:21.740835   45441 system_pods.go:89] "kube-proxy-59zvd" [ca6ef754-0898-4e1d-9ff2-9f42f456db6c] Running
	I0130 20:48:21.740840   45441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-877742" [5870d68e-b7af-408b-9484-a7e414bbe7f7] Running
	I0130 20:48:21.740846   45441 system_pods.go:89] "metrics-server-57f55c9bc5-xjc2m" [7b9a273b-d328-4ae8-925e-5bb305cfe574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:48:21.740853   45441 system_pods.go:89] "storage-provisioner" [db1a28e4-0c45-496e-a566-32a402b0841d] Running
	I0130 20:48:21.740860   45441 system_pods.go:126] duration metric: took 6.244006ms to wait for k8s-apps to be running ...
	I0130 20:48:21.740867   45441 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:48:21.740906   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:48:21.756380   45441 system_svc.go:56] duration metric: took 15.505755ms WaitForService to wait for kubelet.
	I0130 20:48:21.756405   45441 kubeadm.go:581] duration metric: took 4m15.894523943s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:48:21.756429   45441 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:48:21.759579   45441 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:48:21.759605   45441 node_conditions.go:123] node cpu capacity is 2
	I0130 20:48:21.759616   45441 node_conditions.go:105] duration metric: took 3.182491ms to run NodePressure ...
	I0130 20:48:21.759626   45441 start.go:228] waiting for startup goroutines ...
	I0130 20:48:21.759632   45441 start.go:233] waiting for cluster config update ...
	I0130 20:48:21.759642   45441 start.go:242] writing updated cluster config ...
	I0130 20:48:21.759879   45441 ssh_runner.go:195] Run: rm -f paused
	I0130 20:48:21.808471   45441 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:48:21.810628   45441 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-877742" cluster and "default" namespace by default
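	For reference, the readiness polling and healthz probe recorded above can be reproduced by hand against this cluster; a minimal sketch using the context name, pod name, and apiserver endpoint taken from this log (the 60s timeout is an arbitrary illustrative value):

	  kubectl --context default-k8s-diff-port-877742 -n kube-system get pods
	  kubectl --context default-k8s-diff-port-877742 -n kube-system wait --for=condition=Ready pod/metrics-server-57f55c9bc5-xjc2m --timeout=60s
	  curl -k https://192.168.72.52:8444/healthz

	The first two commands mirror the pod_ready polling that hit its 4m0s deadline for metrics-server-57f55c9bc5-xjc2m; the curl mirrors the healthz check that returned 200 at 20:48:17.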
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 20:38:35 UTC, ends at Tue 2024-01-30 20:57:23 UTC. --
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.522777655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648243522762527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=14f27391-7b13-4bcb-92e6-6e3339525fe8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.523486190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=72dd62e4-4a72-4623-8d04-6ccac21b4624 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.523588456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=72dd62e4-4a72-4623-8d04-6ccac21b4624 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.523804943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06,PodSandboxId:1fc3944662b8d0b5fb57c838a2af035185febd102c2896bc7ff1caceb828d5cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647449161787190,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db1a28e4-0c45-496e-a566-32a402b0841d,},Annotations:map[string]string{io.kubernetes.container.hash: fa069038,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe,PodSandboxId:6235f75afb8495e85b6e93de545aa4475234eb83a70af77b92651226eb347b33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647448348849188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-59zvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6ef754-0898-4e1d-9ff2-9f42f456db6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc0ce254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb,PodSandboxId:dd24181a872bf8b7293c77bb33bb2df2421b8c86da93296fb364d481237e104f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647447880044260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tlb8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547c1fe4-3ef7-421a-b460-660a05caa2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 9eba2324,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15,PodSandboxId:860cedfaac3b1a7a22c5dc5445248817e838010afbb5cb6d34ea13a10a944831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647424639917248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea2ea4b2c15f963
45c2278a0529553,},Annotations:map[string]string{io.kubernetes.container.hash: 567c6d13,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7,PodSandboxId:1415e35a0f476876c8b6cd2446b5b3163487b8d45e7328127bfdc64e5a3f2cf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647424520860936,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b9261c0610f04c
da0f868a5f8092d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed,PodSandboxId:c2eead1ebd494f3b848e4ef6632be9bd0f0f3a9be20fcfe4e306723f974fb1e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647424083566300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1ec6e77489a4ee974a22d52af3263b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481,PodSandboxId:5a6ccefe9a301e15a8fba5ade40baa6df4de253a70755e287d136b7dd2197abb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647423966462673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9429139e233673bb34bc19e0a38b20e3,},Annotations:map[string]string{io.kubernetes.container.hash: 30868f22,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=72dd62e4-4a72-4623-8d04-6ccac21b4624 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.570606310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ed6d330a-8cff-47ff-9a5c-fe622c3ba695 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.570691624Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ed6d330a-8cff-47ff-9a5c-fe622c3ba695 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.572002355Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=853a2437-3211-4293-ac3c-dea9f3430cac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.572622259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648243572605004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=853a2437-3211-4293-ac3c-dea9f3430cac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.573459731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=87f166bc-1b6b-439a-a763-36e714245ef2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.573527899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=87f166bc-1b6b-439a-a763-36e714245ef2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.573734149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06,PodSandboxId:1fc3944662b8d0b5fb57c838a2af035185febd102c2896bc7ff1caceb828d5cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647449161787190,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db1a28e4-0c45-496e-a566-32a402b0841d,},Annotations:map[string]string{io.kubernetes.container.hash: fa069038,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe,PodSandboxId:6235f75afb8495e85b6e93de545aa4475234eb83a70af77b92651226eb347b33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647448348849188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-59zvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6ef754-0898-4e1d-9ff2-9f42f456db6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc0ce254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb,PodSandboxId:dd24181a872bf8b7293c77bb33bb2df2421b8c86da93296fb364d481237e104f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647447880044260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tlb8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547c1fe4-3ef7-421a-b460-660a05caa2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 9eba2324,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15,PodSandboxId:860cedfaac3b1a7a22c5dc5445248817e838010afbb5cb6d34ea13a10a944831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647424639917248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea2ea4b2c15f963
45c2278a0529553,},Annotations:map[string]string{io.kubernetes.container.hash: 567c6d13,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7,PodSandboxId:1415e35a0f476876c8b6cd2446b5b3163487b8d45e7328127bfdc64e5a3f2cf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647424520860936,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b9261c0610f04c
da0f868a5f8092d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed,PodSandboxId:c2eead1ebd494f3b848e4ef6632be9bd0f0f3a9be20fcfe4e306723f974fb1e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647424083566300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1ec6e77489a4ee974a22d52af3263b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481,PodSandboxId:5a6ccefe9a301e15a8fba5ade40baa6df4de253a70755e287d136b7dd2197abb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647423966462673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9429139e233673bb34bc19e0a38b20e3,},Annotations:map[string]string{io.kubernetes.container.hash: 30868f22,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=87f166bc-1b6b-439a-a763-36e714245ef2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.619000551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0775984b-2d85-4ba5-9e04-615edd05d76b name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.619083122Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0775984b-2d85-4ba5-9e04-615edd05d76b name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.620611715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d1700422-0cd4-4d91-8a97-a992a59c9e15 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.621139577Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648243621121295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d1700422-0cd4-4d91-8a97-a992a59c9e15 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.622047264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e3a5c71b-51b4-4297-8077-fb923a70f2c3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.622111061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e3a5c71b-51b4-4297-8077-fb923a70f2c3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.622307961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06,PodSandboxId:1fc3944662b8d0b5fb57c838a2af035185febd102c2896bc7ff1caceb828d5cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647449161787190,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db1a28e4-0c45-496e-a566-32a402b0841d,},Annotations:map[string]string{io.kubernetes.container.hash: fa069038,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe,PodSandboxId:6235f75afb8495e85b6e93de545aa4475234eb83a70af77b92651226eb347b33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647448348849188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-59zvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6ef754-0898-4e1d-9ff2-9f42f456db6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc0ce254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb,PodSandboxId:dd24181a872bf8b7293c77bb33bb2df2421b8c86da93296fb364d481237e104f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647447880044260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tlb8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547c1fe4-3ef7-421a-b460-660a05caa2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 9eba2324,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15,PodSandboxId:860cedfaac3b1a7a22c5dc5445248817e838010afbb5cb6d34ea13a10a944831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647424639917248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea2ea4b2c15f963
45c2278a0529553,},Annotations:map[string]string{io.kubernetes.container.hash: 567c6d13,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7,PodSandboxId:1415e35a0f476876c8b6cd2446b5b3163487b8d45e7328127bfdc64e5a3f2cf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647424520860936,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b9261c0610f04c
da0f868a5f8092d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed,PodSandboxId:c2eead1ebd494f3b848e4ef6632be9bd0f0f3a9be20fcfe4e306723f974fb1e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647424083566300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1ec6e77489a4ee974a22d52af3263b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481,PodSandboxId:5a6ccefe9a301e15a8fba5ade40baa6df4de253a70755e287d136b7dd2197abb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647423966462673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9429139e233673bb34bc19e0a38b20e3,},Annotations:map[string]string{io.kubernetes.container.hash: 30868f22,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e3a5c71b-51b4-4297-8077-fb923a70f2c3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.665252550Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ad22da0b-0b0d-43f9-a2f7-cb49324b9819 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.665327224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ad22da0b-0b0d-43f9-a2f7-cb49324b9819 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.667808221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a9c738a3-5975-477f-a460-4fbecc3fd903 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.668370101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648243668352158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a9c738a3-5975-477f-a460-4fbecc3fd903 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.669294506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2c6eb7c5-a0b1-4e4c-8a65-6674af9c9339 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.669357554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2c6eb7c5-a0b1-4e4c-8a65-6674af9c9339 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:23 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 20:57:23.669628455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06,PodSandboxId:1fc3944662b8d0b5fb57c838a2af035185febd102c2896bc7ff1caceb828d5cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647449161787190,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db1a28e4-0c45-496e-a566-32a402b0841d,},Annotations:map[string]string{io.kubernetes.container.hash: fa069038,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe,PodSandboxId:6235f75afb8495e85b6e93de545aa4475234eb83a70af77b92651226eb347b33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647448348849188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-59zvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6ef754-0898-4e1d-9ff2-9f42f456db6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc0ce254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb,PodSandboxId:dd24181a872bf8b7293c77bb33bb2df2421b8c86da93296fb364d481237e104f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647447880044260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tlb8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547c1fe4-3ef7-421a-b460-660a05caa2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 9eba2324,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15,PodSandboxId:860cedfaac3b1a7a22c5dc5445248817e838010afbb5cb6d34ea13a10a944831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647424639917248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea2ea4b2c15f963
45c2278a0529553,},Annotations:map[string]string{io.kubernetes.container.hash: 567c6d13,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7,PodSandboxId:1415e35a0f476876c8b6cd2446b5b3163487b8d45e7328127bfdc64e5a3f2cf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647424520860936,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b9261c0610f04c
da0f868a5f8092d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed,PodSandboxId:c2eead1ebd494f3b848e4ef6632be9bd0f0f3a9be20fcfe4e306723f974fb1e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647424083566300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1ec6e77489a4ee974a22d52af3263b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481,PodSandboxId:5a6ccefe9a301e15a8fba5ade40baa6df4de253a70755e287d136b7dd2197abb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647423966462673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9429139e233673bb34bc19e0a38b20e3,},Annotations:map[string]string{io.kubernetes.container.hash: 30868f22,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2c6eb7c5-a0b1-4e4c-8a65-6674af9c9339 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f3c5ab26cee1e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   1fc3944662b8d       storage-provisioner
	c9cf766ec1300       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   6235f75afb849       kube-proxy-59zvd
	215f206f1db56       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   dd24181a872bf       coredns-5dd5756b68-tlb8h
	1333c1b625367       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago      Running             etcd                      2                   860cedfaac3b1       etcd-default-k8s-diff-port-877742
	8d7e4979680f6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago      Running             kube-scheduler            2                   1415e35a0f476       kube-scheduler-default-k8s-diff-port-877742
	1e755138850bd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   13 minutes ago      Running             kube-controller-manager   2                   c2eead1ebd494       kube-controller-manager-default-k8s-diff-port-877742
	39f0a670e5557       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   13 minutes ago      Running             kube-apiserver            2                   5a6ccefe9a301       kube-apiserver-default-k8s-diff-port-877742
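	
	The listing above is CRI-O's view of the running containers. A roughly equivalent view can be reproduced directly on the node with crictl; the commands below are a hedged sketch, using the profile name taken from this run's logs and the container ID of the kube-apiserver entry shown above:
	
	  $ minikube -p default-k8s-diff-port-877742 ssh -- sudo crictl ps -a
	  $ minikube -p default-k8s-diff-port-877742 ssh -- sudo crictl inspect 39f0a670e5557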
	
	
	==> coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
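	
	The reload entries above indicate CoreDNS picked up a Corefile change after startup. If the active configuration needs to be checked, the standard kubeadm-style ConfigMap can be read back; a hedged example against this cluster's context:
	
	  $ kubectl --context default-k8s-diff-port-877742 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'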
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-877742
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-877742
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=default-k8s-diff-port-877742
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T20_43_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 20:43:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-877742
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 20:57:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 20:54:25 +0000   Tue, 30 Jan 2024 20:43:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 20:54:25 +0000   Tue, 30 Jan 2024 20:43:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 20:54:25 +0000   Tue, 30 Jan 2024 20:43:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 20:54:25 +0000   Tue, 30 Jan 2024 20:44:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.52
	  Hostname:    default-k8s-diff-port-877742
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a199b9d1d72948d8b4e58b7190dc3388
	  System UUID:                a199b9d1-d729-48d8-b4e5-8b7190dc3388
	  Boot ID:                    c404b1f1-c695-4f25-ba15-6261ad204f6c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-tlb8h                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-default-k8s-diff-port-877742                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-877742             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-877742    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-59zvd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-877742             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-xjc2m                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node default-k8s-diff-port-877742 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node default-k8s-diff-port-877742 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-877742 event: Registered Node default-k8s-diff-port-877742 in Controller
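	
	The node summary above is the output of kubectl's node describe. It can be regenerated against this profile's kubeconfig context (minikube names the context after the profile):
	
	  $ kubectl --context default-k8s-diff-port-877742 describe node default-k8s-diff-port-877742
	  $ kubectl --context default-k8s-diff-port-877742 get node default-k8s-diff-port-877742 -o wide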
	
	
	==> dmesg <==
	[Jan30 20:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071777] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.505600] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.413006] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141429] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.479128] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.573992] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.096543] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.132355] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.124552] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.279004] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Jan30 20:39] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[ +22.299248] kauditd_printk_skb: 29 callbacks suppressed
	[Jan30 20:43] systemd-fstab-generator[3506]: Ignoring "noauto" for root device
	[  +9.285320] systemd-fstab-generator[3836]: Ignoring "noauto" for root device
	[Jan30 20:44] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] <==
	{"level":"info","ts":"2024-01-30T20:43:46.50826Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.52:2380"}
	{"level":"info","ts":"2024-01-30T20:43:46.508295Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.52:2380"}
	{"level":"info","ts":"2024-01-30T20:43:46.508921Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-30T20:43:46.508855Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b89c2645334f67c2","initial-advertise-peer-urls":["https://192.168.72.52:2380"],"listen-peer-urls":["https://192.168.72.52:2380"],"advertise-client-urls":["https://192.168.72.52:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.52:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-30T20:43:46.859486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b89c2645334f67c2 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-30T20:43:46.859615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b89c2645334f67c2 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-30T20:43:46.859656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b89c2645334f67c2 received MsgPreVoteResp from b89c2645334f67c2 at term 1"}
	{"level":"info","ts":"2024-01-30T20:43:46.859687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b89c2645334f67c2 became candidate at term 2"}
	{"level":"info","ts":"2024-01-30T20:43:46.859711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b89c2645334f67c2 received MsgVoteResp from b89c2645334f67c2 at term 2"}
	{"level":"info","ts":"2024-01-30T20:43:46.859737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b89c2645334f67c2 became leader at term 2"}
	{"level":"info","ts":"2024-01-30T20:43:46.859763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b89c2645334f67c2 elected leader b89c2645334f67c2 at term 2"}
	{"level":"info","ts":"2024-01-30T20:43:46.864627Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T20:43:46.867658Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7062aa34dd277804","local-member-id":"b89c2645334f67c2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T20:43:46.867757Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T20:43:46.867799Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T20:43:46.867835Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b89c2645334f67c2","local-member-attributes":"{Name:default-k8s-diff-port-877742 ClientURLs:[https://192.168.72.52:2379]}","request-path":"/0/members/b89c2645334f67c2/attributes","cluster-id":"7062aa34dd277804","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-30T20:43:46.867865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T20:43:46.868971Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.52:2379"}
	{"level":"info","ts":"2024-01-30T20:43:46.879573Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T20:43:46.879634Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T20:43:46.879772Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T20:43:46.880802Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-30T20:53:46.912215Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":710}
	{"level":"info","ts":"2024-01-30T20:53:46.914545Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":710,"took":"1.874676ms","hash":1666808174}
	{"level":"info","ts":"2024-01-30T20:53:46.914622Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1666808174,"revision":710,"compact-revision":-1}
	
	
	==> kernel <==
	 20:57:24 up 18 min,  0 users,  load average: 0.24, 0.20, 0.23
	Linux default-k8s-diff-port-877742 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] <==
	I0130 20:53:48.544628       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 20:53:49.544634       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:53:49.544695       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:53:49.544704       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:53:49.544822       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:53:49.545010       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:53:49.545913       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:54:48.380967       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 20:54:49.546195       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:54:49.546292       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:54:49.546303       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:54:49.546195       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:54:49.546495       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:54:49.547816       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:55:48.381131       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 20:56:48.381559       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 20:56:49.546921       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:56:49.547156       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:56:49.547282       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:56:49.548169       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:56:49.548223       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:56:49.549466       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
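	
	The repeated OpenAPI aggregation errors above mean the v1beta1.metrics.k8s.io APIService has no healthy backend (the metrics-server pod never starts; see the kubelet log further down). A hedged way to confirm the aggregation status from outside the node:
	
	  $ kubectl --context default-k8s-diff-port-877742 get apiservice v1beta1.metrics.k8s.io
	  $ kubectl --context default-k8s-diff-port-877742 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'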
	
	
	==> kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] <==
	I0130 20:51:35.126209       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:52:04.640529       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:52:05.136066       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:52:34.646799       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:52:35.144526       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:53:04.652615       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:53:05.153463       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:53:34.657245       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:53:35.163215       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:54:04.667343       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:54:05.173448       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:54:34.673728       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:54:35.183121       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:55:04.680280       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:55:05.193530       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 20:55:13.238512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="976.483µs"
	I0130 20:55:24.237554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="457.999µs"
	E0130 20:55:34.685702       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:55:35.202334       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:56:04.694104       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:56:05.211713       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:56:34.699710       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:56:35.222742       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:57:04.706288       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:57:05.234278       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] <==
	I0130 20:44:08.907300       1 server_others.go:69] "Using iptables proxy"
	I0130 20:44:08.955957       1 node.go:141] Successfully retrieved node IP: 192.168.72.52
	I0130 20:44:09.031474       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 20:44:09.031521       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 20:44:09.041602       1 server_others.go:152] "Using iptables Proxier"
	I0130 20:44:09.041979       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 20:44:09.042677       1 server.go:846] "Version info" version="v1.28.4"
	I0130 20:44:09.042740       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:44:09.044891       1 config.go:188] "Starting service config controller"
	I0130 20:44:09.045813       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 20:44:09.045997       1 config.go:315] "Starting node config controller"
	I0130 20:44:09.046037       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 20:44:09.048191       1 config.go:97] "Starting endpoint slice config controller"
	I0130 20:44:09.048235       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 20:44:09.146597       1 shared_informer.go:318] Caches are synced for node config
	I0130 20:44:09.146736       1 shared_informer.go:318] Caches are synced for service config
	I0130 20:44:09.149059       1 shared_informer.go:318] Caches are synced for endpoint slice config
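	
	kube-proxy started in single-stack IPv4 iptables mode, so service routing on this node goes through the usual KUBE-SERVICES nat chain. A hedged spot check (the chain name is the standard one kube-proxy programs in iptables mode):
	
	  $ kubectl --context default-k8s-diff-port-877742 -n kube-system logs kube-proxy-59zvd | head
	  $ minikube -p default-k8s-diff-port-877742 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head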
	
	
	==> kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] <==
	W0130 20:43:48.556686       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 20:43:48.556838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0130 20:43:48.557152       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 20:43:48.557196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 20:43:48.557338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 20:43:48.557350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0130 20:43:48.557458       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 20:43:48.557470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0130 20:43:49.371093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 20:43:49.371691       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0130 20:43:49.373331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 20:43:49.373495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0130 20:43:49.401521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 20:43:49.401648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0130 20:43:49.427513       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 20:43:49.427606       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 20:43:49.474725       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 20:43:49.474748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0130 20:43:49.491589       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0130 20:43:49.491652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0130 20:43:49.579646       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 20:43:49.579831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 20:43:49.793672       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 20:43:49.793764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0130 20:43:52.645477       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 20:38:35 UTC, ends at Tue 2024-01-30 20:57:24 UTC. --
	Jan 30 20:54:52 default-k8s-diff-port-877742 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:54:52 default-k8s-diff-port-877742 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:54:59 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:54:59.228696    3843 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 20:54:59 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:54:59.228793    3843 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 20:54:59 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:54:59.229030    3843 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hkp9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-xjc2m_kube-system(7b9a273b-d328-4ae8-925e-5bb305cfe574): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 20:54:59 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:54:59.229102    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:55:13 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:55:13.217954    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:55:24 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:55:24.217744    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:55:37 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:55:37.217164    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:55:50 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:55:50.218263    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:55:52 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:55:52.266278    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:55:52 default-k8s-diff-port-877742 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:55:52 default-k8s-diff-port-877742 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:55:52 default-k8s-diff-port-877742 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:56:02 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:56:02.217126    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:56:17 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:56:17.216534    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:56:31 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:56:31.217003    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:56:44 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:56:44.216547    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:56:52 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:56:52.271458    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:56:52 default-k8s-diff-port-877742 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:56:52 default-k8s-diff-port-877742 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:56:52 default-k8s-diff-port-877742 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:56:55 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:56:55.216841    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:57:08 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:57:08.217023    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:57:21 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:57:21.216740    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	
	
	==> storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] <==
	I0130 20:44:09.272180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 20:44:09.289039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 20:44:09.289242       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 20:44:09.299966       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 20:44:09.300219       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-877742_a7f17109-70f2-469e-89f8-8a72dd6e5923!
	I0130 20:44:09.300929       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6058efe4-4925-4878-86f5-a6ec8615d032", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-877742_a7f17109-70f2-469e-89f8-8a72dd6e5923 became leader
	I0130 20:44:09.401549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-877742_a7f17109-70f2-469e-89f8-8a72dd6e5923!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-877742 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xjc2m
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-877742 describe pod metrics-server-57f55c9bc5-xjc2m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-877742 describe pod metrics-server-57f55c9bc5-xjc2m: exit status 1 (65.457985ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xjc2m" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-877742 describe pod metrics-server-57f55c9bc5-xjc2m: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.27s)
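The repeated ErrImagePull / ImagePullBackOff entries in the kubelet log above are the expected consequence of the test setup: the metrics-server addon is deliberately pointed at an unreachable registry (fake.domain), as recorded in the Audit table of the minikube logs. A minimal sketch of that configuration and one way to inspect the resulting pull failure, assuming the same profile and context names used by this run:

	out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-877742 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	kubectl --context default-k8s-diff-port-877742 -n kube-system \
	  describe pod -l k8s-app=metrics-server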

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (352.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0130 20:52:54.229654   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 20:53:07.771683   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208583 -n embed-certs-208583
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-30 20:58:09.733122618 +0000 UTC m=+5726.810092415
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-208583 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-208583 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.11µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-208583 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
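The assertion at start_stop_delete_test.go:297 appears to check that the dashboard-metrics-scraper deployment carries the overridden image passed via --images=MetricsScraper=registry.k8s.io/echoserver:1.4; because the describe call above hit the context deadline, the captured deployment info is empty. A hedged sketch of a direct way to read the deployed image, assuming the same context and namespace:

	kubectl --context embed-certs-208583 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'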
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208583 -n embed-certs-208583
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-208583 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-208583 logs -n 25: (1.317581198s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-757744 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | disable-driver-mounts-757744                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:31 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473743             | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208583            | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC | 30 Jan 24 20:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-877742  | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC | 30 Jan 24 20:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC |                     |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473743                  | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208583                 | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-150971        | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-877742       | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC | 30 Jan 24 20:48 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-150971             | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC | 30 Jan 24 20:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:57 UTC | 30 Jan 24 20:57 UTC |
	| start   | -p newest-cni-564644 --memory=2200 --alsologtostderr   | newest-cni-564644            | jenkins | v1.32.0 | 30 Jan 24 20:57 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:57 UTC | 30 Jan 24 20:57 UTC |
	| start   | -p auto-997045 --memory=3072                           | auto-997045                  | jenkins | v1.32.0 | 30 Jan 24 20:57 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 20:57:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 20:57:59.540895   50715 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:57:59.541008   50715 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:57:59.541018   50715 out.go:309] Setting ErrFile to fd 2...
	I0130 20:57:59.541023   50715 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:57:59.541241   50715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:57:59.541905   50715 out.go:303] Setting JSON to false
	I0130 20:57:59.542836   50715 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6025,"bootTime":1706642255,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 20:57:59.542890   50715 start.go:138] virtualization: kvm guest
	I0130 20:57:59.545258   50715 out.go:177] * [auto-997045] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 20:57:59.546630   50715 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 20:57:59.546679   50715 notify.go:220] Checking for updates...
	I0130 20:57:59.547944   50715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 20:57:59.549294   50715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:57:59.550604   50715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 20:57:59.551993   50715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 20:57:59.553317   50715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 20:57:59.554929   50715 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:57:59.555063   50715 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:57:59.555189   50715 config.go:182] Loaded profile config "newest-cni-564644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 20:57:59.555299   50715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 20:57:59.591912   50715 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 20:57:59.593058   50715 start.go:298] selected driver: kvm2
	I0130 20:57:59.593070   50715 start.go:902] validating driver "kvm2" against <nil>
	I0130 20:57:59.593090   50715 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 20:57:59.594044   50715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:57:59.594132   50715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 20:57:59.609037   50715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 20:57:59.609096   50715 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 20:57:59.609334   50715 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 20:57:59.609411   50715 cni.go:84] Creating CNI manager for ""
	I0130 20:57:59.609431   50715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:57:59.609444   50715 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 20:57:59.609458   50715 start_flags.go:321] config:
	{Name:auto-997045 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-997045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:57:59.609630   50715 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:57:59.611291   50715 out.go:177] * Starting control plane node auto-997045 in cluster auto-997045
	I0130 20:57:56.465702   50429 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0130 20:57:56.465826   50429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:57:56.465869   50429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:57:56.481469   50429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
	I0130 20:57:56.481874   50429 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:57:56.482389   50429 main.go:141] libmachine: Using API Version  1
	I0130 20:57:56.482427   50429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:57:56.482770   50429 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:57:56.482969   50429 main.go:141] libmachine: (newest-cni-564644) Calling .GetMachineName
	I0130 20:57:56.483106   50429 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 20:57:56.483300   50429 start.go:159] libmachine.API.Create for "newest-cni-564644" (driver="kvm2")
	I0130 20:57:56.483337   50429 client.go:168] LocalClient.Create starting
	I0130 20:57:56.483363   50429 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem
	I0130 20:57:56.483396   50429 main.go:141] libmachine: Decoding PEM data...
	I0130 20:57:56.483416   50429 main.go:141] libmachine: Parsing certificate...
	I0130 20:57:56.483471   50429 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem
	I0130 20:57:56.483490   50429 main.go:141] libmachine: Decoding PEM data...
	I0130 20:57:56.483504   50429 main.go:141] libmachine: Parsing certificate...
	I0130 20:57:56.483517   50429 main.go:141] libmachine: Running pre-create checks...
	I0130 20:57:56.483527   50429 main.go:141] libmachine: (newest-cni-564644) Calling .PreCreateCheck
	I0130 20:57:56.483986   50429 main.go:141] libmachine: (newest-cni-564644) Calling .GetConfigRaw
	I0130 20:57:56.484692   50429 main.go:141] libmachine: Creating machine...
	I0130 20:57:56.484720   50429 main.go:141] libmachine: (newest-cni-564644) Calling .Create
	I0130 20:57:56.485004   50429 main.go:141] libmachine: (newest-cni-564644) Creating KVM machine...
	I0130 20:57:56.486315   50429 main.go:141] libmachine: (newest-cni-564644) DBG | found existing default KVM network
	I0130 20:57:56.487968   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:57:56.487732   50472 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025e050}
	I0130 20:57:56.493064   50429 main.go:141] libmachine: (newest-cni-564644) DBG | trying to create private KVM network mk-newest-cni-564644 192.168.39.0/24...
	I0130 20:57:56.570326   50429 main.go:141] libmachine: (newest-cni-564644) DBG | private KVM network mk-newest-cni-564644 192.168.39.0/24 created
	I0130 20:57:56.570463   50429 main.go:141] libmachine: (newest-cni-564644) Setting up store path in /home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644 ...
	I0130 20:57:56.570496   50429 main.go:141] libmachine: (newest-cni-564644) Building disk image from file:///home/jenkins/minikube-integration/18007-4458/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 20:57:56.570511   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:57:56.570431   50472 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 20:57:56.570622   50429 main.go:141] libmachine: (newest-cni-564644) Downloading /home/jenkins/minikube-integration/18007-4458/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18007-4458/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0130 20:57:56.793140   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:57:56.793036   50472 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/id_rsa...
	I0130 20:57:57.128033   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:57:57.127904   50472 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/newest-cni-564644.rawdisk...
	I0130 20:57:57.128080   50429 main.go:141] libmachine: (newest-cni-564644) DBG | Writing magic tar header
	I0130 20:57:57.128133   50429 main.go:141] libmachine: (newest-cni-564644) DBG | Writing SSH key tar header
	I0130 20:57:57.128171   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:57:57.128035   50472 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644 ...
	I0130 20:57:57.128197   50429 main.go:141] libmachine: (newest-cni-564644) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644
	I0130 20:57:57.128211   50429 main.go:141] libmachine: (newest-cni-564644) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458/.minikube/machines
	I0130 20:57:57.128227   50429 main.go:141] libmachine: (newest-cni-564644) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644 (perms=drwx------)
	I0130 20:57:57.128241   50429 main.go:141] libmachine: (newest-cni-564644) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 20:57:57.128256   50429 main.go:141] libmachine: (newest-cni-564644) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4458
	I0130 20:57:57.128265   50429 main.go:141] libmachine: (newest-cni-564644) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0130 20:57:57.128276   50429 main.go:141] libmachine: (newest-cni-564644) DBG | Checking permissions on dir: /home/jenkins
	I0130 20:57:57.128285   50429 main.go:141] libmachine: (newest-cni-564644) DBG | Checking permissions on dir: /home
	I0130 20:57:57.128298   50429 main.go:141] libmachine: (newest-cni-564644) DBG | Skipping /home - not owner
	I0130 20:57:57.128351   50429 main.go:141] libmachine: (newest-cni-564644) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458/.minikube/machines (perms=drwxr-xr-x)
	I0130 20:57:57.128378   50429 main.go:141] libmachine: (newest-cni-564644) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458/.minikube (perms=drwxr-xr-x)
	I0130 20:57:57.128392   50429 main.go:141] libmachine: (newest-cni-564644) Setting executable bit set on /home/jenkins/minikube-integration/18007-4458 (perms=drwxrwxr-x)
	I0130 20:57:57.128403   50429 main.go:141] libmachine: (newest-cni-564644) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0130 20:57:57.128420   50429 main.go:141] libmachine: (newest-cni-564644) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0130 20:57:57.128438   50429 main.go:141] libmachine: (newest-cni-564644) Creating domain...
	I0130 20:57:57.129617   50429 main.go:141] libmachine: (newest-cni-564644) define libvirt domain using xml: 
	I0130 20:57:57.129636   50429 main.go:141] libmachine: (newest-cni-564644) <domain type='kvm'>
	I0130 20:57:57.129647   50429 main.go:141] libmachine: (newest-cni-564644)   <name>newest-cni-564644</name>
	I0130 20:57:57.129657   50429 main.go:141] libmachine: (newest-cni-564644)   <memory unit='MiB'>2200</memory>
	I0130 20:57:57.129667   50429 main.go:141] libmachine: (newest-cni-564644)   <vcpu>2</vcpu>
	I0130 20:57:57.129675   50429 main.go:141] libmachine: (newest-cni-564644)   <features>
	I0130 20:57:57.129685   50429 main.go:141] libmachine: (newest-cni-564644)     <acpi/>
	I0130 20:57:57.129692   50429 main.go:141] libmachine: (newest-cni-564644)     <apic/>
	I0130 20:57:57.129701   50429 main.go:141] libmachine: (newest-cni-564644)     <pae/>
	I0130 20:57:57.129715   50429 main.go:141] libmachine: (newest-cni-564644)     
	I0130 20:57:57.129725   50429 main.go:141] libmachine: (newest-cni-564644)   </features>
	I0130 20:57:57.129734   50429 main.go:141] libmachine: (newest-cni-564644)   <cpu mode='host-passthrough'>
	I0130 20:57:57.129743   50429 main.go:141] libmachine: (newest-cni-564644)   
	I0130 20:57:57.129750   50429 main.go:141] libmachine: (newest-cni-564644)   </cpu>
	I0130 20:57:57.129773   50429 main.go:141] libmachine: (newest-cni-564644)   <os>
	I0130 20:57:57.129787   50429 main.go:141] libmachine: (newest-cni-564644)     <type>hvm</type>
	I0130 20:57:57.129797   50429 main.go:141] libmachine: (newest-cni-564644)     <boot dev='cdrom'/>
	I0130 20:57:57.129805   50429 main.go:141] libmachine: (newest-cni-564644)     <boot dev='hd'/>
	I0130 20:57:57.129815   50429 main.go:141] libmachine: (newest-cni-564644)     <bootmenu enable='no'/>
	I0130 20:57:57.129824   50429 main.go:141] libmachine: (newest-cni-564644)   </os>
	I0130 20:57:57.129833   50429 main.go:141] libmachine: (newest-cni-564644)   <devices>
	I0130 20:57:57.129842   50429 main.go:141] libmachine: (newest-cni-564644)     <disk type='file' device='cdrom'>
	I0130 20:57:57.129856   50429 main.go:141] libmachine: (newest-cni-564644)       <source file='/home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/boot2docker.iso'/>
	I0130 20:57:57.129867   50429 main.go:141] libmachine: (newest-cni-564644)       <target dev='hdc' bus='scsi'/>
	I0130 20:57:57.129877   50429 main.go:141] libmachine: (newest-cni-564644)       <readonly/>
	I0130 20:57:57.129885   50429 main.go:141] libmachine: (newest-cni-564644)     </disk>
	I0130 20:57:57.129895   50429 main.go:141] libmachine: (newest-cni-564644)     <disk type='file' device='disk'>
	I0130 20:57:57.129906   50429 main.go:141] libmachine: (newest-cni-564644)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0130 20:57:57.129921   50429 main.go:141] libmachine: (newest-cni-564644)       <source file='/home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/newest-cni-564644.rawdisk'/>
	I0130 20:57:57.129930   50429 main.go:141] libmachine: (newest-cni-564644)       <target dev='hda' bus='virtio'/>
	I0130 20:57:57.129944   50429 main.go:141] libmachine: (newest-cni-564644)     </disk>
	I0130 20:57:57.129953   50429 main.go:141] libmachine: (newest-cni-564644)     <interface type='network'>
	I0130 20:57:57.129965   50429 main.go:141] libmachine: (newest-cni-564644)       <source network='mk-newest-cni-564644'/>
	I0130 20:57:57.129974   50429 main.go:141] libmachine: (newest-cni-564644)       <model type='virtio'/>
	I0130 20:57:57.129984   50429 main.go:141] libmachine: (newest-cni-564644)     </interface>
	I0130 20:57:57.129993   50429 main.go:141] libmachine: (newest-cni-564644)     <interface type='network'>
	I0130 20:57:57.130004   50429 main.go:141] libmachine: (newest-cni-564644)       <source network='default'/>
	I0130 20:57:57.130013   50429 main.go:141] libmachine: (newest-cni-564644)       <model type='virtio'/>
	I0130 20:57:57.130022   50429 main.go:141] libmachine: (newest-cni-564644)     </interface>
	I0130 20:57:57.130031   50429 main.go:141] libmachine: (newest-cni-564644)     <serial type='pty'>
	I0130 20:57:57.130044   50429 main.go:141] libmachine: (newest-cni-564644)       <target port='0'/>
	I0130 20:57:57.130052   50429 main.go:141] libmachine: (newest-cni-564644)     </serial>
	I0130 20:57:57.130062   50429 main.go:141] libmachine: (newest-cni-564644)     <console type='pty'>
	I0130 20:57:57.130071   50429 main.go:141] libmachine: (newest-cni-564644)       <target type='serial' port='0'/>
	I0130 20:57:57.130081   50429 main.go:141] libmachine: (newest-cni-564644)     </console>
	I0130 20:57:57.130090   50429 main.go:141] libmachine: (newest-cni-564644)     <rng model='virtio'>
	I0130 20:57:57.130101   50429 main.go:141] libmachine: (newest-cni-564644)       <backend model='random'>/dev/random</backend>
	I0130 20:57:57.130109   50429 main.go:141] libmachine: (newest-cni-564644)     </rng>
	I0130 20:57:57.130122   50429 main.go:141] libmachine: (newest-cni-564644)     
	I0130 20:57:57.130130   50429 main.go:141] libmachine: (newest-cni-564644)     
	I0130 20:57:57.130139   50429 main.go:141] libmachine: (newest-cni-564644)   </devices>
	I0130 20:57:57.130146   50429 main.go:141] libmachine: (newest-cni-564644) </domain>
	I0130 20:57:57.130157   50429 main.go:141] libmachine: (newest-cni-564644) 
	I0130 20:57:57.134944   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:02:b7:98 in network default
	I0130 20:57:57.135565   50429 main.go:141] libmachine: (newest-cni-564644) Ensuring networks are active...
	I0130 20:57:57.135586   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:57:57.136277   50429 main.go:141] libmachine: (newest-cni-564644) Ensuring network default is active
	I0130 20:57:57.136689   50429 main.go:141] libmachine: (newest-cni-564644) Ensuring network mk-newest-cni-564644 is active
	I0130 20:57:57.137323   50429 main.go:141] libmachine: (newest-cni-564644) Getting domain xml...
	I0130 20:57:57.138275   50429 main.go:141] libmachine: (newest-cni-564644) Creating domain...
	I0130 20:57:58.541330   50429 main.go:141] libmachine: (newest-cni-564644) Waiting to get IP...
	I0130 20:57:58.542042   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:57:58.543594   50429 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:57:58.543623   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:57:58.543564   50472 retry.go:31] will retry after 200.22539ms: waiting for machine to come up
	I0130 20:57:59.230043   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:57:59.230505   50429 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:57:59.230530   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:57:59.230474   50472 retry.go:31] will retry after 280.796817ms: waiting for machine to come up
	I0130 20:57:59.512923   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:57:59.513457   50429 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:57:59.513485   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:57:59.513420   50472 retry.go:31] will retry after 487.513429ms: waiting for machine to come up
	I0130 20:58:00.002134   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:58:00.002618   50429 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:58:00.002641   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:58:00.002582   50472 retry.go:31] will retry after 471.637363ms: waiting for machine to come up
	I0130 20:58:00.475842   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:58:00.476347   50429 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:58:00.476378   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:58:00.476288   50472 retry.go:31] will retry after 713.191987ms: waiting for machine to come up
	I0130 20:58:01.190917   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:58:01.191332   50429 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:58:01.191362   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:58:01.191287   50472 retry.go:31] will retry after 887.469948ms: waiting for machine to come up
	I0130 20:57:59.612501   50715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:57:59.612535   50715 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 20:57:59.612548   50715 cache.go:56] Caching tarball of preloaded images
	I0130 20:57:59.612619   50715 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 20:57:59.612631   50715 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 20:57:59.612731   50715 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/auto-997045/config.json ...
	I0130 20:57:59.612756   50715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/auto-997045/config.json: {Name:mk94964dae2bac6865bb22bef97dad8fda8f5ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:57:59.612902   50715 start.go:365] acquiring machines lock for auto-997045: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:58:02.080735   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:58:02.081159   50429 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:58:02.081193   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:58:02.081108   50472 retry.go:31] will retry after 817.297334ms: waiting for machine to come up
	I0130 20:58:02.900039   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:58:02.900455   50429 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:58:02.900480   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:58:02.900404   50472 retry.go:31] will retry after 903.965896ms: waiting for machine to come up
	I0130 20:58:03.806399   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:58:03.806757   50429 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:58:03.806786   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:58:03.806710   50472 retry.go:31] will retry after 1.46891859s: waiting for machine to come up
	I0130 20:58:05.277291   50429 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:58:05.277831   50429 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:58:05.277860   50429 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:58:05.277773   50472 retry.go:31] will retry after 2.327864379s: waiting for machine to come up
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 20:38:14 UTC, ends at Tue 2024-01-30 20:58:10 UTC. --
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.498576753Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:66b3a844ac9d2844629589a74faba10a47448c961a1e3a1c9f27a470b7ab5f7b,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jqzzv,Uid:59f362b6-606e-4bcd-b5eb-c8822aaf8b9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647137179881512,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jqzzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59f362b6-606e-4bcd-b5eb-c8822aaf8b9c,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T20:38:49.212356417Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0c8049b581240989535266df9a54a3c8b0139ff64661303bb79927b4e76bf48e,Metadata:&PodSandboxMetadata{Name:busybox,Uid:689c9651-345a-43fd-aa34-90f6d5e6af09,Namespace:default,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1706647137159377230,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 689c9651-345a-43fd-aa34-90f6d5e6af09,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T20:38:49.212400857Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9873696caf780275be0944cb14326234f5395fe5bffad99bd642df03d597bb6b,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-ghg9n,Uid:37700115-83e9-440a-b396-56f50adb6311,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647134801412996,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-ghg9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37700115-83e9-440a-b396-56f50adb6311,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T20:38:49.
212406510Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b9919313ba9b5930a3c49678e0e22bd83083ba0e16b63fc272fc247d8caa1a6c,Metadata:&PodSandboxMetadata{Name:kube-proxy-g7q5t,Uid:47f109e0-7a56-472f-8c7e-ba2b138de352,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647129573143772,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g7q5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f109e0-7a56-472f-8c7e-ba2b138de352,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T20:38:49.212397618Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:15108916-a630-4208-99f7-5706db407b22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647129549130862,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-01-30T20:38:49.212408057Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a84f96548609d7037f5403820927cdeba2fb19dee949b6dc469a39c510bda8f8,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-208583,Uid:3263ac53d6b91bfa78c53088de606433,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647122725868939,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3263ac53d6b91bfa78c53088de606433,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.63:2379,kubernetes.io/config.hash: 3263ac53d6b91bfa78c53088de606433,kubernetes.io/config.seen: 2024-01-30T20:38:42.204461655Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:449d84e5ef66c8dc96ece0f76be94bbb4a99f48e32ea3cde50e251dca3e7a670,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-c
erts-208583,Uid:8209177f62ae28e095966ad6f0cbbaa0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647122708924394,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8209177f62ae28e095966ad6f0cbbaa0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8209177f62ae28e095966ad6f0cbbaa0,kubernetes.io/config.seen: 2024-01-30T20:38:42.204459485Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4fb4b82b20065edcb49c98a7ee285d373ebcf0ea192cb88232862c8e887166f3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-208583,Uid:b0e3c20f03b0f0b3970d7212f3c0b776,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647122692638249,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-2085
83,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0e3c20f03b0f0b3970d7212f3c0b776,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b0e3c20f03b0f0b3970d7212f3c0b776,kubernetes.io/config.seen: 2024-01-30T20:38:42.204460734Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:74457383bf69a71606341c6e8c2b0a0f1f7a82460f41cc7a2168177a7c019a1d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-208583,Uid:e99eedee0b4268817b10691671423352,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647122673549025,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99eedee0b4268817b10691671423352,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.63:8443,kubernetes.io/config.hash: e99eedee0b4268817b1069167142
3352,kubernetes.io/config.seen: 2024-01-30T20:38:42.204455535Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=df3e3775-3c4a-4ba6-ad4e-39ef88a4b0d6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.500086279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f986f6bf-44ae-4313-8d4a-f1b724281037 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.500156767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f986f6bf-44ae-4313-8d4a-f1b724281037 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.500436269Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647161473504325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4953867b0d06ea86a5c05932a01453ca0ed667a443bdf9ede0606f1821bb9,PodSandboxId:0c8049b581240989535266df9a54a3c8b0139ff64661303bb79927b4e76bf48e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647140868168618,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 689c9651-345a-43fd-aa34-90f6d5e6af09,},Annotations:map[string]string{io.kubernetes.container.hash: ec603722,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d,PodSandboxId:66b3a844ac9d2844629589a74faba10a47448c961a1e3a1c9f27a470b7ab5f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647137891660189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jqzzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59f362b6-606e-4bcd-b5eb-c8822aaf8b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 923a1a71,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706647130198167780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254,PodSandboxId:b9919313ba9b5930a3c49678e0e22bd83083ba0e16b63fc272fc247d8caa1a6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647130140498010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7q5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f109e0-7a
56-472f-8c7e-ba2b138de352,},Annotations:map[string]string{io.kubernetes.container.hash: 396cdb76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18,PodSandboxId:a84f96548609d7037f5403820927cdeba2fb19dee949b6dc469a39c510bda8f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647123760271132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3263ac53d6b91bfa78c53088de606433,},Annotations:map[string
]string{io.kubernetes.container.hash: 42a47b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f,PodSandboxId:4fb4b82b20065edcb49c98a7ee285d373ebcf0ea192cb88232862c8e887166f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647123516304267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0e3c20f03b0f0b3970d7212f3c0b776,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2,PodSandboxId:449d84e5ef66c8dc96ece0f76be94bbb4a99f48e32ea3cde50e251dca3e7a670,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647123431731822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8209177f62ae28e095966ad6f0cbbaa0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d,PodSandboxId:74457383bf69a71606341c6e8c2b0a0f1f7a82460f41cc7a2168177a7c019a1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647123178606099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99eedee0b4268817b10691671423352,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 94585cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f986f6bf-44ae-4313-8d4a-f1b724281037 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.523575711Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3fe733ad-d9b0-4f51-b074-d4fc9d19775e name=/runtime.v1.RuntimeService/Version
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.523656345Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3fe733ad-d9b0-4f51-b074-d4fc9d19775e name=/runtime.v1.RuntimeService/Version
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.526037139Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fb1c7189-ca65-478a-a11a-e3dd03f5390e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.526562924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648290526546152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fb1c7189-ca65-478a-a11a-e3dd03f5390e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.527714669Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e0565338-cad7-4ad5-a9ad-d9869dfb8d3f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.527875956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e0565338-cad7-4ad5-a9ad-d9869dfb8d3f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.529229922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647161473504325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4953867b0d06ea86a5c05932a01453ca0ed667a443bdf9ede0606f1821bb9,PodSandboxId:0c8049b581240989535266df9a54a3c8b0139ff64661303bb79927b4e76bf48e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647140868168618,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 689c9651-345a-43fd-aa34-90f6d5e6af09,},Annotations:map[string]string{io.kubernetes.container.hash: ec603722,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d,PodSandboxId:66b3a844ac9d2844629589a74faba10a47448c961a1e3a1c9f27a470b7ab5f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647137891660189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jqzzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59f362b6-606e-4bcd-b5eb-c8822aaf8b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 923a1a71,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706647130198167780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254,PodSandboxId:b9919313ba9b5930a3c49678e0e22bd83083ba0e16b63fc272fc247d8caa1a6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647130140498010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7q5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f109e0-7a
56-472f-8c7e-ba2b138de352,},Annotations:map[string]string{io.kubernetes.container.hash: 396cdb76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18,PodSandboxId:a84f96548609d7037f5403820927cdeba2fb19dee949b6dc469a39c510bda8f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647123760271132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3263ac53d6b91bfa78c53088de606433,},Annotations:map[string
]string{io.kubernetes.container.hash: 42a47b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f,PodSandboxId:4fb4b82b20065edcb49c98a7ee285d373ebcf0ea192cb88232862c8e887166f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647123516304267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0e3c20f03b0f0b3970d7212f3c0b776,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2,PodSandboxId:449d84e5ef66c8dc96ece0f76be94bbb4a99f48e32ea3cde50e251dca3e7a670,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647123431731822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8209177f62ae28e095966ad6f0cbbaa0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d,PodSandboxId:74457383bf69a71606341c6e8c2b0a0f1f7a82460f41cc7a2168177a7c019a1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647123178606099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99eedee0b4268817b10691671423352,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 94585cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e0565338-cad7-4ad5-a9ad-d9869dfb8d3f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.580886788Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=efc339b0-7f97-4ccf-83bf-e5283bcc3039 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.580981313Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=efc339b0-7f97-4ccf-83bf-e5283bcc3039 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.582054961Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=030e2f9e-3c0d-47d5-bf02-07572d77e787 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.582422158Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648290582409486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=030e2f9e-3c0d-47d5-bf02-07572d77e787 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.583237438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c8e56aba-f11a-4bae-a3c7-9cd05c07d39d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.583305215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c8e56aba-f11a-4bae-a3c7-9cd05c07d39d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.583508715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647161473504325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4953867b0d06ea86a5c05932a01453ca0ed667a443bdf9ede0606f1821bb9,PodSandboxId:0c8049b581240989535266df9a54a3c8b0139ff64661303bb79927b4e76bf48e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647140868168618,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 689c9651-345a-43fd-aa34-90f6d5e6af09,},Annotations:map[string]string{io.kubernetes.container.hash: ec603722,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d,PodSandboxId:66b3a844ac9d2844629589a74faba10a47448c961a1e3a1c9f27a470b7ab5f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647137891660189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jqzzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59f362b6-606e-4bcd-b5eb-c8822aaf8b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 923a1a71,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706647130198167780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254,PodSandboxId:b9919313ba9b5930a3c49678e0e22bd83083ba0e16b63fc272fc247d8caa1a6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647130140498010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7q5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f109e0-7a
56-472f-8c7e-ba2b138de352,},Annotations:map[string]string{io.kubernetes.container.hash: 396cdb76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18,PodSandboxId:a84f96548609d7037f5403820927cdeba2fb19dee949b6dc469a39c510bda8f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647123760271132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3263ac53d6b91bfa78c53088de606433,},Annotations:map[string
]string{io.kubernetes.container.hash: 42a47b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f,PodSandboxId:4fb4b82b20065edcb49c98a7ee285d373ebcf0ea192cb88232862c8e887166f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647123516304267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0e3c20f03b0f0b3970d7212f3c0b776,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2,PodSandboxId:449d84e5ef66c8dc96ece0f76be94bbb4a99f48e32ea3cde50e251dca3e7a670,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647123431731822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8209177f62ae28e095966ad6f0cbbaa0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d,PodSandboxId:74457383bf69a71606341c6e8c2b0a0f1f7a82460f41cc7a2168177a7c019a1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647123178606099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99eedee0b4268817b10691671423352,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 94585cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c8e56aba-f11a-4bae-a3c7-9cd05c07d39d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.621530646Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=36fb4f1f-f6ee-4ce3-bd48-8707b2603cba name=/runtime.v1.RuntimeService/Version
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.621624130Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=36fb4f1f-f6ee-4ce3-bd48-8707b2603cba name=/runtime.v1.RuntimeService/Version
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.623219555Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=15223be8-0cd6-4d7d-b572-ba82a6147cce name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.623566652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648290623554623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=15223be8-0cd6-4d7d-b572-ba82a6147cce name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.624546673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a8cf10ce-3978-4e95-806e-2e3c5b745bc9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.624594211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a8cf10ce-3978-4e95-806e-2e3c5b745bc9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:58:10 embed-certs-208583 crio[720]: time="2024-01-30 20:58:10.624905542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647161473504325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4953867b0d06ea86a5c05932a01453ca0ed667a443bdf9ede0606f1821bb9,PodSandboxId:0c8049b581240989535266df9a54a3c8b0139ff64661303bb79927b4e76bf48e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647140868168618,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 689c9651-345a-43fd-aa34-90f6d5e6af09,},Annotations:map[string]string{io.kubernetes.container.hash: ec603722,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d,PodSandboxId:66b3a844ac9d2844629589a74faba10a47448c961a1e3a1c9f27a470b7ab5f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647137891660189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jqzzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59f362b6-606e-4bcd-b5eb-c8822aaf8b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 923a1a71,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5,PodSandboxId:ab9925835e346411b26bc8894ec94e416b909be80a6b1d371ffc7c4be7635601,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706647130198167780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 15108916-a630-4208-99f7-5706db407b22,},Annotations:map[string]string{io.kubernetes.container.hash: 40a6b532,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254,PodSandboxId:b9919313ba9b5930a3c49678e0e22bd83083ba0e16b63fc272fc247d8caa1a6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647130140498010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7q5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f109e0-7a
56-472f-8c7e-ba2b138de352,},Annotations:map[string]string{io.kubernetes.container.hash: 396cdb76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18,PodSandboxId:a84f96548609d7037f5403820927cdeba2fb19dee949b6dc469a39c510bda8f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647123760271132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3263ac53d6b91bfa78c53088de606433,},Annotations:map[string
]string{io.kubernetes.container.hash: 42a47b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f,PodSandboxId:4fb4b82b20065edcb49c98a7ee285d373ebcf0ea192cb88232862c8e887166f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647123516304267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0e3c20f03b0f0b3970d7212f3c0b776,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2,PodSandboxId:449d84e5ef66c8dc96ece0f76be94bbb4a99f48e32ea3cde50e251dca3e7a670,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647123431731822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8209177f62ae28e095966ad6f0cbbaa0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d,PodSandboxId:74457383bf69a71606341c6e8c2b0a0f1f7a82460f41cc7a2168177a7c019a1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647123178606099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-208583,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99eedee0b4268817b10691671423352,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 94585cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a8cf10ce-3978-4e95-806e-2e3c5b745bc9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	84ab3bb4fc327       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   ab9925835e346       storage-provisioner
	bdb4953867b0d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   0c8049b581240       busybox
	4c08f1c12145a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      19 minutes ago      Running             coredns                   1                   66b3a844ac9d2       coredns-5dd5756b68-jqzzv
	5dbd1a278b495       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   ab9925835e346       storage-provisioner
	cceda50230a0f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      19 minutes ago      Running             kube-proxy                1                   b9919313ba9b5       kube-proxy-g7q5t
	0684f62c32df0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      19 minutes ago      Running             etcd                      1                   a84f96548609d       etcd-embed-certs-208583
	74b99df1e69b6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      19 minutes ago      Running             kube-scheduler            1                   4fb4b82b20065       kube-scheduler-embed-certs-208583
	b53924cf08f0c       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      19 minutes ago      Running             kube-controller-manager   1                   449d84e5ef66c       kube-controller-manager-embed-certs-208583
	f2b510da3b115       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      19 minutes ago      Running             kube-apiserver            1                   74457383bf69a       kube-apiserver-embed-certs-208583
	
	
	==> coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38716 - 44587 "HINFO IN 9201679870384010574.7855106596069656275. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012753039s
	
	
	==> describe nodes <==
	Name:               embed-certs-208583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-208583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=embed-certs-208583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T20_29_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 20:29:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-208583
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 20:58:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 20:54:38 +0000   Tue, 30 Jan 2024 20:29:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 20:54:38 +0000   Tue, 30 Jan 2024 20:29:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 20:54:38 +0000   Tue, 30 Jan 2024 20:29:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 20:54:38 +0000   Tue, 30 Jan 2024 20:38:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.63
	  Hostname:    embed-certs-208583
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdb5105259974561b918af369df02796
	  System UUID:                bdb51052-5997-4561-b918-af369df02796
	  Boot ID:                    ab0320e5-8c2d-4df3-b351-d7c99f8ce415
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5dd5756b68-jqzzv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-208583                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-208583             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-208583    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-g7q5t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-208583             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-ghg9n               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node embed-certs-208583 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node embed-certs-208583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node embed-certs-208583 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node embed-certs-208583 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-208583 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-208583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-208583 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-208583 event: Registered Node embed-certs-208583 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-208583 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-208583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-208583 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-208583 event: Registered Node embed-certs-208583 in Controller
	
	
	==> dmesg <==
	[Jan30 20:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066069] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.339494] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.214283] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +0.145036] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.482974] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.098762] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.116494] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.143886] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.127771] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.222496] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[ +17.333923] systemd-fstab-generator[920]: Ignoring "noauto" for root device
	[ +15.290435] kauditd_printk_skb: 19 callbacks suppressed
	[Jan30 20:39] hrtimer: interrupt took 2691671 ns
	
	
	==> etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] <==
	{"level":"warn","ts":"2024-01-30T20:38:54.032468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"420.391952ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:721"}
	{"level":"warn","ts":"2024-01-30T20:38:54.032549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.86655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-208583\" ","response":"range_response_count:1 size:5677"}
	{"level":"info","ts":"2024-01-30T20:38:54.0326Z","caller":"traceutil/trace.go:171","msg":"trace[661698520] range","detail":"{range_begin:/registry/minions/embed-certs-208583; range_end:; response_count:1; response_revision:572; }","duration":"312.917115ms","start":"2024-01-30T20:38:53.719675Z","end":"2024-01-30T20:38:54.032592Z","steps":["trace[661698520] 'agreement among raft nodes before linearized reading'  (duration: 312.842355ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:38:54.032624Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:38:53.719659Z","time spent":"312.958838ms","remote":"127.0.0.1:35192","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5700,"request content":"key:\"/registry/minions/embed-certs-208583\" "}
	{"level":"warn","ts":"2024-01-30T20:38:54.032739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"674.791963ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2024-01-30T20:38:54.032945Z","caller":"traceutil/trace.go:171","msg":"trace[1853740171] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:572; }","duration":"674.939797ms","start":"2024-01-30T20:38:53.357942Z","end":"2024-01-30T20:38:54.032882Z","steps":["trace[1853740171] 'agreement among raft nodes before linearized reading'  (duration: 674.77413ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:38:54.032972Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:38:53.357927Z","time spent":"675.037768ms","remote":"127.0.0.1:35236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":1015,"request content":"key:\"/registry/storageclasses/standard\" "}
	{"level":"info","ts":"2024-01-30T20:38:54.03255Z","caller":"traceutil/trace.go:171","msg":"trace[1336100563] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:572; }","duration":"420.512097ms","start":"2024-01-30T20:38:53.612024Z","end":"2024-01-30T20:38:54.032536Z","steps":["trace[1336100563] 'agreement among raft nodes before linearized reading'  (duration: 420.321252ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:38:54.033078Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:38:53.61201Z","time spent":"421.062331ms","remote":"127.0.0.1:35198","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":744,"request content":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" "}
	{"level":"warn","ts":"2024-01-30T20:38:54.035593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"411.041529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:1 size:2362"}
	{"level":"info","ts":"2024-01-30T20:38:54.035651Z","caller":"traceutil/trace.go:171","msg":"trace[1509354512] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:572; }","duration":"411.103585ms","start":"2024-01-30T20:38:53.624539Z","end":"2024-01-30T20:38:54.035643Z","steps":["trace[1509354512] 'agreement among raft nodes before linearized reading'  (duration: 408.028832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:38:54.035676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:38:53.624523Z","time spent":"411.146567ms","remote":"127.0.0.1:35270","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":2385,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" "}
	{"level":"warn","ts":"2024-01-30T20:38:54.672398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"452.239978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-208583\" ","response":"range_response_count:1 size:5677"}
	{"level":"info","ts":"2024-01-30T20:38:54.672702Z","caller":"traceutil/trace.go:171","msg":"trace[121966225] range","detail":"{range_begin:/registry/minions/embed-certs-208583; range_end:; response_count:1; response_revision:572; }","duration":"452.560412ms","start":"2024-01-30T20:38:54.220123Z","end":"2024-01-30T20:38:54.672684Z","steps":["trace[121966225] 'range keys from in-memory index tree'  (duration: 452.125387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:38:54.672963Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:38:54.220107Z","time spent":"452.830486ms","remote":"127.0.0.1:35192","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5700,"request content":"key:\"/registry/minions/embed-certs-208583\" "}
	{"level":"info","ts":"2024-01-30T20:39:34.507364Z","caller":"traceutil/trace.go:171","msg":"trace[831219265] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:679; }","duration":"224.134605ms","start":"2024-01-30T20:39:34.283196Z","end":"2024-01-30T20:39:34.507331Z","steps":["trace[831219265] 'read index received'  (duration: 205.240065ms)","trace[831219265] 'applied index is now lower than readState.Index'  (duration: 18.89331ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-30T20:39:34.507542Z","caller":"traceutil/trace.go:171","msg":"trace[1371141812] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"231.2881ms","start":"2024-01-30T20:39:34.276241Z","end":"2024-01-30T20:39:34.507529Z","steps":["trace[1371141812] 'process raft request'  (duration: 212.235642ms)","trace[1371141812] 'compare'  (duration: 18.587527ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-30T20:39:34.507963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.768116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-ghg9n\" ","response":"range_response_count:1 size:4026"}
	{"level":"info","ts":"2024-01-30T20:39:34.508032Z","caller":"traceutil/trace.go:171","msg":"trace[53200355] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-ghg9n; range_end:; response_count:1; response_revision:633; }","duration":"224.848006ms","start":"2024-01-30T20:39:34.283177Z","end":"2024-01-30T20:39:34.508025Z","steps":["trace[53200355] 'agreement among raft nodes before linearized reading'  (duration: 224.701048ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T20:48:46.888016Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":851}
	{"level":"info","ts":"2024-01-30T20:48:46.891215Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":851,"took":"2.864276ms","hash":341810038}
	{"level":"info","ts":"2024-01-30T20:48:46.891278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":341810038,"revision":851,"compact-revision":-1}
	{"level":"info","ts":"2024-01-30T20:53:46.90168Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1093}
	{"level":"info","ts":"2024-01-30T20:53:46.903619Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1093,"took":"1.338176ms","hash":2605762580}
	{"level":"info","ts":"2024-01-30T20:53:46.903708Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2605762580,"revision":1093,"compact-revision":851}
	
	
	==> kernel <==
	 20:58:11 up 20 min,  0 users,  load average: 0.07, 0.09, 0.08
	Linux embed-certs-208583 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] <==
	W0130 20:53:49.829258       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:53:49.829393       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:53:49.829413       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:53:49.829272       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:53:49.829445       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:53:49.830740       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:54:48.590426       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 20:54:49.829982       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:54:49.830083       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:54:49.830098       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:54:49.831303       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:54:49.831349       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:54:49.831357       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:55:48.590049       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 20:56:48.590240       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 20:56:49.831210       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:56:49.831361       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:56:49.831434       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:56:49.831420       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:56:49.831504       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:56:49.833466       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:57:48.589635       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] <==
	I0130 20:52:32.075441       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:53:01.552560       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:53:02.088007       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:53:31.558213       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:53:32.096667       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:54:01.566367       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:54:02.107070       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:54:31.572341       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:54:32.117290       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 20:54:58.251481       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="278.093µs"
	E0130 20:55:01.578381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:55:02.126299       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 20:55:09.246478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="96.292µs"
	E0130 20:55:31.583170       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:55:32.135312       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:56:01.591114       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:56:02.145216       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:56:31.595957       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:56:32.155913       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:57:01.601293       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:57:02.165431       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:57:31.606445       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:57:32.175166       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:58:01.613589       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:58:02.184680       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] <==
	I0130 20:38:50.535555       1 server_others.go:69] "Using iptables proxy"
	I0130 20:38:50.555469       1 node.go:141] Successfully retrieved node IP: 192.168.61.63
	I0130 20:38:50.706519       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 20:38:50.706586       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 20:38:50.712720       1 server_others.go:152] "Using iptables Proxier"
	I0130 20:38:50.712844       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 20:38:50.713031       1 server.go:846] "Version info" version="v1.28.4"
	I0130 20:38:50.713091       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:38:50.715309       1 config.go:188] "Starting service config controller"
	I0130 20:38:50.715350       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 20:38:50.715391       1 config.go:97] "Starting endpoint slice config controller"
	I0130 20:38:50.715395       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 20:38:50.715586       1 config.go:315] "Starting node config controller"
	I0130 20:38:50.715592       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 20:38:50.818690       1 shared_informer.go:318] Caches are synced for node config
	I0130 20:38:50.818863       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0130 20:38:50.818936       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] <==
	I0130 20:38:45.611826       1 serving.go:348] Generated self-signed cert in-memory
	W0130 20:38:48.697259       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0130 20:38:48.697415       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 20:38:48.697457       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0130 20:38:48.697488       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0130 20:38:48.858349       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0130 20:38:48.858463       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:38:48.862959       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0130 20:38:48.863031       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0130 20:38:48.864077       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0130 20:38:48.864316       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0130 20:38:48.964636       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 20:38:14 UTC, ends at Tue 2024-01-30 20:58:11 UTC. --
	Jan 30 20:55:20 embed-certs-208583 kubelet[926]: E0130 20:55:20.230126     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:55:33 embed-certs-208583 kubelet[926]: E0130 20:55:33.228963     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:55:42 embed-certs-208583 kubelet[926]: E0130 20:55:42.243166     926 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:55:42 embed-certs-208583 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:55:42 embed-certs-208583 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:55:42 embed-certs-208583 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:55:48 embed-certs-208583 kubelet[926]: E0130 20:55:48.229671     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:56:00 embed-certs-208583 kubelet[926]: E0130 20:56:00.230361     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:56:14 embed-certs-208583 kubelet[926]: E0130 20:56:14.229187     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:56:29 embed-certs-208583 kubelet[926]: E0130 20:56:29.228905     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:56:42 embed-certs-208583 kubelet[926]: E0130 20:56:42.228981     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:56:42 embed-certs-208583 kubelet[926]: E0130 20:56:42.243214     926 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:56:42 embed-certs-208583 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:56:42 embed-certs-208583 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:56:42 embed-certs-208583 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:56:54 embed-certs-208583 kubelet[926]: E0130 20:56:54.230137     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:57:09 embed-certs-208583 kubelet[926]: E0130 20:57:09.228713     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:57:24 embed-certs-208583 kubelet[926]: E0130 20:57:24.227989     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:57:37 embed-certs-208583 kubelet[926]: E0130 20:57:37.228575     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:57:42 embed-certs-208583 kubelet[926]: E0130 20:57:42.243154     926 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:57:42 embed-certs-208583 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:57:42 embed-certs-208583 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:57:42 embed-certs-208583 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:57:51 embed-certs-208583 kubelet[926]: E0130 20:57:51.229039     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	Jan 30 20:58:02 embed-certs-208583 kubelet[926]: E0130 20:58:02.229996     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ghg9n" podUID="37700115-83e9-440a-b396-56f50adb6311"
	
	
	==> storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] <==
	I0130 20:38:50.486906       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0130 20:39:20.489282       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] <==
	I0130 20:39:21.621208       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 20:39:21.642033       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 20:39:21.642222       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 20:39:39.045827       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 20:39:39.045980       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-208583_e5da18ec-ba4a-443b-98b6-d4f3cc1af7e8!
	I0130 20:39:39.047893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d2a5e740-c445-4dba-b408-fd63b3f21abd", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-208583_e5da18ec-ba4a-443b-98b6-d4f3cc1af7e8 became leader
	I0130 20:39:39.146943       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-208583_e5da18ec-ba4a-443b-98b6-d4f3cc1af7e8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208583 -n embed-certs-208583
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-208583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ghg9n
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-208583 describe pod metrics-server-57f55c9bc5-ghg9n
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-208583 describe pod metrics-server-57f55c9bc5-ghg9n: exit status 1 (65.390829ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ghg9n" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-208583 describe pod metrics-server-57f55c9bc5-ghg9n: exit status 1
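Note: the kubelet log above shows the metrics-server container repeatedly backing off on pulling fake.domain/registry.k8s.io/echoserver:1.4, which matches the registry override this test applies ("addons enable metrics-server ... --registries=MetricsServer=fake.domain", see the Audit table later in this report). As an illustrative follow-up only (not part of the recorded run, and assuming the embed-certs-208583 cluster were still reachable with this kubeconfig context), the configured image could be read straight off the deployment:

	# illustrative manual check, not recorded test output
	kubectl --context embed-certs-208583 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'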
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (352.04s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (259.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0130 20:53:39.710907   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-473743 -n no-preload-473743
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-30 20:57:56.298913366 +0000 UTC m=+5713.375883143
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-473743 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-473743 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.904µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-473743 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
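For context, a minimal manual check of which image the dashboard-metrics-scraper deployment was actually given would look like the command below. This is illustrative only and not part of the recorded test output; it assumes the no-preload-473743 cluster and its kubeconfig context were still reachable, and since the dashboard addon never came up in this run the deployment may simply not exist:

	# illustrative manual check, not recorded test output
	kubectl --context no-preload-473743 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'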
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473743 -n no-preload-473743
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-473743 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-473743 logs -n 25: (1.288437095s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-757744 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | disable-driver-mounts-757744                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:31 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473743             | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208583            | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC | 30 Jan 24 20:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-877742  | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC | 30 Jan 24 20:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC |                     |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473743                  | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208583                 | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-150971        | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-877742       | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC | 30 Jan 24 20:48 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-150971             | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC | 30 Jan 24 20:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:57 UTC | 30 Jan 24 20:57 UTC |
	| start   | -p newest-cni-564644 --memory=2200 --alsologtostderr   | newest-cni-564644            | jenkins | v1.32.0 | 30 Jan 24 20:57 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 20:57:56
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 20:57:56.385279   50429 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:57:56.385433   50429 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:57:56.385448   50429 out.go:309] Setting ErrFile to fd 2...
	I0130 20:57:56.385463   50429 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:57:56.385718   50429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:57:56.386301   50429 out.go:303] Setting JSON to false
	I0130 20:57:56.387260   50429 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6022,"bootTime":1706642255,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 20:57:56.387346   50429 start.go:138] virtualization: kvm guest
	I0130 20:57:56.389722   50429 out.go:177] * [newest-cni-564644] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 20:57:56.391237   50429 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 20:57:56.392561   50429 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 20:57:56.391306   50429 notify.go:220] Checking for updates...
	I0130 20:57:56.394981   50429 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:57:56.396131   50429 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 20:57:56.397231   50429 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 20:57:56.398270   50429 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 20:57:56.399939   50429 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:57:56.400040   50429 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:57:56.400134   50429 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 20:57:56.400218   50429 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 20:57:56.442496   50429 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 20:57:56.443740   50429 start.go:298] selected driver: kvm2
	I0130 20:57:56.443755   50429 start.go:902] validating driver "kvm2" against <nil>
	I0130 20:57:56.443765   50429 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 20:57:56.444417   50429 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:57:56.444507   50429 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 20:57:56.460131   50429 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 20:57:56.460174   50429 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0130 20:57:56.460191   50429 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0130 20:57:56.460384   50429 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0130 20:57:56.460407   50429 cni.go:84] Creating CNI manager for ""
	I0130 20:57:56.460416   50429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:57:56.460423   50429 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 20:57:56.460430   50429 start_flags.go:321] config:
	{Name:newest-cni-564644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-564644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:57:56.460537   50429 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:57:56.462743   50429 out.go:177] * Starting control plane node newest-cni-564644 in cluster newest-cni-564644
	I0130 20:57:56.463771   50429 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 20:57:56.463799   50429 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0130 20:57:56.463833   50429 cache.go:56] Caching tarball of preloaded images
	I0130 20:57:56.463907   50429 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 20:57:56.463921   50429 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0130 20:57:56.463999   50429 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644/config.json ...
	I0130 20:57:56.464014   50429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644/config.json: {Name:mk2d3e3a10672c72457474c1c0518d65fa7c0875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:57:56.464133   50429 start.go:365] acquiring machines lock for newest-cni-564644: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:57:56.464159   50429 start.go:369] acquired machines lock for "newest-cni-564644" in 13.536µs
	I0130 20:57:56.464171   50429 start.go:93] Provisioning new machine with config: &{Name:newest-cni-564644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-564644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:57:56.464223   50429 start.go:125] createHost starting for "" (driver="kvm2")
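The preceding lines show minikube resolving its start flags into a ClusterConfig, persisting it as config.json under the profile directory, and then taking the machines lock before createHost. As an illustrative aside (not part of the test output), the saved profile can be read back with a few lines of Go; the JSON field names below are assumed to mirror the struct dump above, and the path uses the default ~/.minikube layout rather than the CI-specific MINIKUBE_HOME used in this run:

// Sketch: read a minikube profile config and print a few of the
// settings the start flags resolved to. Field names ("Driver",
// "KubernetesConfig", "NetworkPlugin", ...) are assumed from the
// struct dump in the log above, not verified against minikube source.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type clusterConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		NetworkPlugin     string
		ServiceCIDR       string
	}
}

func main() {
	// Default profile location; this run used a CI-specific MINIKUBE_HOME.
	path := os.ExpandEnv("$HOME/.minikube/profiles/newest-cni-564644/config.json")
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var cfg clusterConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s via %s, plugin=%s, serviceCIDR=%s\n",
		cfg.Name, cfg.KubernetesConfig.KubernetesVersion, cfg.Driver,
		cfg.KubernetesConfig.NetworkPlugin, cfg.KubernetesConfig.ServiceCIDR)
}

Against this profile it would print something like "newest-cni-564644: v1.29.0-rc.2 via kvm2, plugin=cni, serviceCIDR=10.96.0.0/12", matching the values logged above.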
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 20:39:19 UTC, ends at Tue 2024-01-30 20:57:57 UTC. --
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.039066873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648277039052876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=ce305e95-cd10-4989-8ddf-0c667cd17fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.039880694Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7e9a490f-f42d-4c9d-b66c-747f4175a3d0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.039925113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7e9a490f-f42d-4c9d-b66c-747f4175a3d0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.040265847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706647239580537453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfced8166a62235ba8bd8f0e9ca8b9e9f0091b8c09a3c98cd911949b909a9c1,PodSandboxId:a5442d98c12249d2769c781d5742a3a9c767e3c98ce51408824252f8aeba62d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647219937616575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76483155-3957-4487-a0a8-7c5511ea5fe4,},Annotations:map[string]string{io.kubernetes.container.hash: e27fea54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c,PodSandboxId:2017da92eac5d4134322e2004f54a0fdd411e91da80cdb0b8389fdb2939bf97f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706647216685352196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d4c7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8701b4d-0616-4c05-9ba0-0157adae2d13,},Annotations:map[string]string{io.kubernetes.container.hash: 6e86a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689,PodSandboxId:14211c17a6df66ba5c4755ea6a1e75792b91164fb9fe5ca98be81375944c9f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706647209233271888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zklzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94d19c-b0
d6-4e78-86e8-e6b5f3608753,},Annotations:map[string]string{io.kubernetes.container.hash: bf8c81b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1706647209212984933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e
-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79,PodSandboxId:c59ff4568d34bc5d105303772e883a0753cf4d6af6f33195eff49cccbcf1bdf7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706647202959064062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a114725bb58f16fe05b4
0766dfd675a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901,PodSandboxId:9f5658af0abd0f8fa497b88b01ec774c36f40a91077d023cdeb081102e38d3c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706647202816601449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c77b1d2fc69e7744c0b3663b58046a,},Annotations:map[string]string{io.kub
ernetes.container.hash: d2a09030,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f,PodSandboxId:29e36a5e06e206529459dcca763c0e35d28f1f59e0ae964c029b4b3e41299293,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706647202673676774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b849b15baa44349c67e242be9c74523,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e,PodSandboxId:aac3d267dd8222b3a9325c82f372f6ca00aa95918efdb03fd4e186bc1a0317ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706647202353223356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0488c96715580f546d9b840aeeef0809,},Annotations:map[string
]string{io.kubernetes.container.hash: 91e80384,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7e9a490f-f42d-4c9d-b66c-747f4175a3d0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.079424245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=39f8d0ca-90b4-4554-8979-a7f0ac0ef7f0 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.079478801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=39f8d0ca-90b4-4554-8979-a7f0ac0ef7f0 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.080817355Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8dffd2fa-b27e-4789-95f0-dbe8adfd3724 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.081130463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648277081113127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=8dffd2fa-b27e-4789-95f0-dbe8adfd3724 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.081826806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1eca6c3d-ea91-46ba-ba41-504a5a0bea27 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.081872902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1eca6c3d-ea91-46ba-ba41-504a5a0bea27 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.082486126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706647239580537453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfced8166a62235ba8bd8f0e9ca8b9e9f0091b8c09a3c98cd911949b909a9c1,PodSandboxId:a5442d98c12249d2769c781d5742a3a9c767e3c98ce51408824252f8aeba62d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647219937616575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76483155-3957-4487-a0a8-7c5511ea5fe4,},Annotations:map[string]string{io.kubernetes.container.hash: e27fea54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c,PodSandboxId:2017da92eac5d4134322e2004f54a0fdd411e91da80cdb0b8389fdb2939bf97f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706647216685352196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d4c7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8701b4d-0616-4c05-9ba0-0157adae2d13,},Annotations:map[string]string{io.kubernetes.container.hash: 6e86a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689,PodSandboxId:14211c17a6df66ba5c4755ea6a1e75792b91164fb9fe5ca98be81375944c9f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706647209233271888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zklzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94d19c-b0
d6-4e78-86e8-e6b5f3608753,},Annotations:map[string]string{io.kubernetes.container.hash: bf8c81b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1706647209212984933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e
-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79,PodSandboxId:c59ff4568d34bc5d105303772e883a0753cf4d6af6f33195eff49cccbcf1bdf7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706647202959064062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a114725bb58f16fe05b4
0766dfd675a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901,PodSandboxId:9f5658af0abd0f8fa497b88b01ec774c36f40a91077d023cdeb081102e38d3c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706647202816601449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c77b1d2fc69e7744c0b3663b58046a,},Annotations:map[string]string{io.kub
ernetes.container.hash: d2a09030,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f,PodSandboxId:29e36a5e06e206529459dcca763c0e35d28f1f59e0ae964c029b4b3e41299293,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706647202673676774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b849b15baa44349c67e242be9c74523,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e,PodSandboxId:aac3d267dd8222b3a9325c82f372f6ca00aa95918efdb03fd4e186bc1a0317ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706647202353223356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0488c96715580f546d9b840aeeef0809,},Annotations:map[string
]string{io.kubernetes.container.hash: 91e80384,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1eca6c3d-ea91-46ba-ba41-504a5a0bea27 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.126402129Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d7d47774-fe39-480d-82df-25bca6c4c717 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.126484437Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d7d47774-fe39-480d-82df-25bca6c4c717 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.127634740Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=47e55f55-55c5-4b79-aebb-40779e0a6b27 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.128072959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648277128057163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=47e55f55-55c5-4b79-aebb-40779e0a6b27 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.128606558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7d886d3e-da59-4d17-8862-0bb4f7e2312f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.128691025Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7d886d3e-da59-4d17-8862-0bb4f7e2312f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.129036495Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706647239580537453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfced8166a62235ba8bd8f0e9ca8b9e9f0091b8c09a3c98cd911949b909a9c1,PodSandboxId:a5442d98c12249d2769c781d5742a3a9c767e3c98ce51408824252f8aeba62d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647219937616575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76483155-3957-4487-a0a8-7c5511ea5fe4,},Annotations:map[string]string{io.kubernetes.container.hash: e27fea54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c,PodSandboxId:2017da92eac5d4134322e2004f54a0fdd411e91da80cdb0b8389fdb2939bf97f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706647216685352196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d4c7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8701b4d-0616-4c05-9ba0-0157adae2d13,},Annotations:map[string]string{io.kubernetes.container.hash: 6e86a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689,PodSandboxId:14211c17a6df66ba5c4755ea6a1e75792b91164fb9fe5ca98be81375944c9f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706647209233271888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zklzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94d19c-b0
d6-4e78-86e8-e6b5f3608753,},Annotations:map[string]string{io.kubernetes.container.hash: bf8c81b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1706647209212984933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e
-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79,PodSandboxId:c59ff4568d34bc5d105303772e883a0753cf4d6af6f33195eff49cccbcf1bdf7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706647202959064062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a114725bb58f16fe05b4
0766dfd675a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901,PodSandboxId:9f5658af0abd0f8fa497b88b01ec774c36f40a91077d023cdeb081102e38d3c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706647202816601449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c77b1d2fc69e7744c0b3663b58046a,},Annotations:map[string]string{io.kub
ernetes.container.hash: d2a09030,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f,PodSandboxId:29e36a5e06e206529459dcca763c0e35d28f1f59e0ae964c029b4b3e41299293,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706647202673676774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b849b15baa44349c67e242be9c74523,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e,PodSandboxId:aac3d267dd8222b3a9325c82f372f6ca00aa95918efdb03fd4e186bc1a0317ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706647202353223356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0488c96715580f546d9b840aeeef0809,},Annotations:map[string
]string{io.kubernetes.container.hash: 91e80384,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7d886d3e-da59-4d17-8862-0bb4f7e2312f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.166102354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c9ed497a-2875-4f10-b133-2f2774503bc7 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.166166843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c9ed497a-2875-4f10-b133-2f2774503bc7 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.167253720Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=34bae790-f37a-4a9c-b1fc-65409c5678d7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.167568916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648277167557931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=34bae790-f37a-4a9c-b1fc-65409c5678d7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.168206015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a2c10d8c-92b9-486c-9963-3641acbf3d89 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.168251217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a2c10d8c-92b9-486c-9963-3641acbf3d89 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:57 no-preload-473743 crio[718]: time="2024-01-30 20:57:57.168544802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706647239580537453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfced8166a62235ba8bd8f0e9ca8b9e9f0091b8c09a3c98cd911949b909a9c1,PodSandboxId:a5442d98c12249d2769c781d5742a3a9c767e3c98ce51408824252f8aeba62d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706647219937616575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76483155-3957-4487-a0a8-7c5511ea5fe4,},Annotations:map[string]string{io.kubernetes.container.hash: e27fea54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c,PodSandboxId:2017da92eac5d4134322e2004f54a0fdd411e91da80cdb0b8389fdb2939bf97f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706647216685352196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d4c7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8701b4d-0616-4c05-9ba0-0157adae2d13,},Annotations:map[string]string{io.kubernetes.container.hash: 6e86a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689,PodSandboxId:14211c17a6df66ba5c4755ea6a1e75792b91164fb9fe5ca98be81375944c9f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706647209233271888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zklzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94d19c-b0
d6-4e78-86e8-e6b5f3608753,},Annotations:map[string]string{io.kubernetes.container.hash: bf8c81b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446,PodSandboxId:d4e4e386d23faec07722b81d92baaa13efa13c229bfaffe1133538f6ecead0d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1706647209212984933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a257b079-cb6e
-45fd-b05d-9ad6fa26225e,},Annotations:map[string]string{io.kubernetes.container.hash: 206f44ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79,PodSandboxId:c59ff4568d34bc5d105303772e883a0753cf4d6af6f33195eff49cccbcf1bdf7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706647202959064062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a114725bb58f16fe05b4
0766dfd675a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901,PodSandboxId:9f5658af0abd0f8fa497b88b01ec774c36f40a91077d023cdeb081102e38d3c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706647202816601449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c77b1d2fc69e7744c0b3663b58046a,},Annotations:map[string]string{io.kub
ernetes.container.hash: d2a09030,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f,PodSandboxId:29e36a5e06e206529459dcca763c0e35d28f1f59e0ae964c029b4b3e41299293,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706647202673676774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b849b15baa44349c67e242be9c74523,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e,PodSandboxId:aac3d267dd8222b3a9325c82f372f6ca00aa95918efdb03fd4e186bc1a0317ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706647202353223356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0488c96715580f546d9b840aeeef0809,},Annotations:map[string
]string{io.kubernetes.container.hash: 91e80384,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a2c10d8c-92b9-486c-9963-3641acbf3d89 name=/runtime.v1.RuntimeService/ListContainers
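The CRI-O debug log above is the runtime answering its gRPC pollers: repeated Version, ImageFsInfo and ListContainers calls on the /runtime.v1 services, each returning the same nine containers. As an illustrative sketch (not part of the test), the same ListContainers round-trip can be issued with the standard CRI client against the socket named in the node's kubeadm.alpha.kubernetes.io/cri-socket annotation:

// Sketch: call runtime.v1.RuntimeService/ListContainers on the local
// CRI-O socket, the same RPC that appears throughout the log above.
// Assumes k8s.io/cri-api and google.golang.org/grpc are available and
// that the program runs on the node with access to the socket.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter returns the full container list, which is what the
	// "No filters were applied, returning full container list" lines mean.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-25s attempt=%d  %s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}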
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e690d53fe9ae6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner       3                   d4e4e386d23fa       storage-provisioner
	bdfced8166a62       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   17 minutes ago      Running             busybox                   1                   a5442d98c1224       busybox
	3d08fb7c4f0e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Running             coredns                   1                   2017da92eac5d       coredns-76f75df574-d4c7t
	880f1c6b663c7       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      17 minutes ago      Running             kube-proxy                1                   14211c17a6df6       kube-proxy-zklzt
	748483279e2b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Exited              storage-provisioner       2                   d4e4e386d23fa       storage-provisioner
	39917caad7f3b       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      17 minutes ago      Running             kube-scheduler            1                   c59ff4568d34b       kube-scheduler-no-preload-473743
	b6d8d2bbf972c       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      17 minutes ago      Running             etcd                      1                   9f5658af0abd0       etcd-no-preload-473743
	10fb0450f95ed       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      17 minutes ago      Running             kube-controller-manager   1                   29e36a5e06e20       kube-controller-manager-no-preload-473743
	ac5dbd0849de6       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      17 minutes ago      Running             kube-apiserver            1                   aac3d267dd822       kube-apiserver-no-preload-473743
	
	
	==> coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52340 - 62464 "HINFO IN 288902453189497013.5229750491074800888. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014583491s
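The single NXDOMAIN for a long random name is CoreDNS's loop-detection probe, sent to itself over UDP at startup; normal cluster lookups traverse the same plugin chain. Purely as a hedged sketch (the 10.96.0.10 address is only the conventional kube-dns ClusterIP inside the 10.96.0.0/12 ServiceCIDR configured above, not a value taken from this log), the same resolver path can be exercised from Go with a custom dialer:

// Sketch: resolve an in-cluster name via the cluster DNS service.
// The server address is an assumption; substitute the real ClusterIP
// reported by `kubectl -n kube-system get svc kube-dns`.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(addrs)
}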
	
	
	==> describe nodes <==
	Name:               no-preload-473743
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-473743
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=no-preload-473743
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T20_29_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 20:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-473743
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 20:57:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 20:55:56 +0000   Tue, 30 Jan 2024 20:29:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 20:55:56 +0000   Tue, 30 Jan 2024 20:29:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 20:55:56 +0000   Tue, 30 Jan 2024 20:29:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 20:55:56 +0000   Tue, 30 Jan 2024 20:40:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.220
	  Hostname:    no-preload-473743
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a382c357ad5489ab98c79e836d3de29
	  System UUID:                9a382c35-7ad5-489a-b98c-79e836d3de29
	  Boot ID:                    708ff03a-910b-4ccf-ad1e-a0814598f511
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-76f75df574-d4c7t                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-473743                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-473743             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-473743    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-zklzt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-473743             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-wzb2g              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-473743 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-473743 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-473743 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-473743 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-473743 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-473743 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-473743 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-473743 event: Registered Node no-preload-473743 in Controller
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node no-preload-473743 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node no-preload-473743 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node no-preload-473743 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node no-preload-473743 event: Registered Node no-preload-473743 in Controller
	
	
	==> dmesg <==
	[Jan30 20:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071657] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779061] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.501980] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158670] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.779353] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.198430] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.116740] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.137142] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.099664] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.216915] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[Jan30 20:40] systemd-fstab-generator[1328]: Ignoring "noauto" for root device
	[ +15.101318] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] <==
	{"level":"info","ts":"2024-01-30T20:40:04.800344Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-30T20:40:04.800686Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"685a0398c95469a9","initial-advertise-peer-urls":["https://192.168.50.220:2380"],"listen-peer-urls":["https://192.168.50.220:2380"],"advertise-client-urls":["https://192.168.50.220:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.220:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-30T20:40:04.800853Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-30T20:40:04.800982Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.220:2380"}
	{"level":"info","ts":"2024-01-30T20:40:04.801007Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.220:2380"}
	{"level":"info","ts":"2024-01-30T20:40:06.484214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-30T20:40:06.484383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-30T20:40:06.484507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 received MsgPreVoteResp from 685a0398c95469a9 at term 2"}
	{"level":"info","ts":"2024-01-30T20:40:06.484614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 became candidate at term 3"}
	{"level":"info","ts":"2024-01-30T20:40:06.484671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 received MsgVoteResp from 685a0398c95469a9 at term 3"}
	{"level":"info","ts":"2024-01-30T20:40:06.484705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"685a0398c95469a9 became leader at term 3"}
	{"level":"info","ts":"2024-01-30T20:40:06.48491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 685a0398c95469a9 elected leader 685a0398c95469a9 at term 3"}
	{"level":"info","ts":"2024-01-30T20:40:06.486678Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"685a0398c95469a9","local-member-attributes":"{Name:no-preload-473743 ClientURLs:[https://192.168.50.220:2379]}","request-path":"/0/members/685a0398c95469a9/attributes","cluster-id":"cc509ba192cc331e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-30T20:40:06.486753Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T20:40:06.486889Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T20:40:06.487013Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T20:40:06.487428Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T20:40:06.48909Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.220:2379"}
	{"level":"info","ts":"2024-01-30T20:40:06.489221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-30T20:50:06.523097Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":860}
	{"level":"info","ts":"2024-01-30T20:50:06.526516Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":860,"took":"2.765338ms","hash":1731629767}
	{"level":"info","ts":"2024-01-30T20:50:06.526597Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1731629767,"revision":860,"compact-revision":-1}
	{"level":"info","ts":"2024-01-30T20:55:06.534706Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1102}
	{"level":"info","ts":"2024-01-30T20:55:06.536719Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1102,"took":"1.296543ms","hash":2891797918}
	{"level":"info","ts":"2024-01-30T20:55:06.537535Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2891797918,"revision":1102,"compact-revision":860}
	
	
	==> kernel <==
	 20:57:57 up 18 min,  0 users,  load average: 0.05, 0.12, 0.10
	Linux no-preload-473743 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] <==
	I0130 20:51:08.974740       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:53:08.974068       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:53:08.974149       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:53:08.974158       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:53:08.975212       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:53:08.975321       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:53:08.975363       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:55:07.979284       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:55:07.979426       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0130 20:55:08.980227       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:55:08.980313       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:55:08.980324       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:55:08.980438       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:55:08.980656       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:55:08.981496       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:56:08.980570       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:56:08.980938       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:56:08.980979       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:56:08.982063       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:56:08.982127       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:56:08.982135       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] <==
	I0130 20:52:21.508695       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:52:51.000194       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:52:51.517668       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:53:21.006316       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:53:21.526478       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:53:51.014267       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:53:51.535683       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:54:21.022721       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:54:21.547387       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:54:51.030345       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:54:51.555481       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:55:21.035664       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:55:21.564970       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:55:51.043456       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:55:51.574695       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:56:21.052223       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:56:21.586042       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 20:56:22.412005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="360.713µs"
	I0130 20:56:33.406432       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="547.812µs"
	E0130 20:56:51.057426       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:56:51.594196       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:57:21.062655       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:57:21.602548       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:57:51.070346       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:57:51.612129       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] <==
	I0130 20:40:09.561642       1 server_others.go:72] "Using iptables proxy"
	I0130 20:40:09.584008       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.220"]
	I0130 20:40:09.713026       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0130 20:40:09.713083       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 20:40:09.713099       1 server_others.go:168] "Using iptables Proxier"
	I0130 20:40:09.717048       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 20:40:09.717241       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0130 20:40:09.717280       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:40:09.722953       1 config.go:315] "Starting node config controller"
	I0130 20:40:09.722993       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 20:40:09.723349       1 config.go:188] "Starting service config controller"
	I0130 20:40:09.723356       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 20:40:09.723374       1 config.go:97] "Starting endpoint slice config controller"
	I0130 20:40:09.723377       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 20:40:09.823620       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0130 20:40:09.823750       1 shared_informer.go:318] Caches are synced for service config
	I0130 20:40:09.823754       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] <==
	I0130 20:40:05.244107       1 serving.go:380] Generated self-signed cert in-memory
	W0130 20:40:07.881976       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0130 20:40:07.882129       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 20:40:07.882162       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0130 20:40:07.882268       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0130 20:40:07.975394       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0130 20:40:07.975534       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:40:07.987553       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0130 20:40:07.987613       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0130 20:40:07.988046       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0130 20:40:07.988122       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0130 20:40:08.088882       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 20:39:19 UTC, ends at Tue 2024-01-30 20:57:57 UTC. --
	Jan 30 20:55:03 no-preload-473743 kubelet[1334]: E0130 20:55:03.386269    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:55:16 no-preload-473743 kubelet[1334]: E0130 20:55:16.385138    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:55:28 no-preload-473743 kubelet[1334]: E0130 20:55:28.385169    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:55:41 no-preload-473743 kubelet[1334]: E0130 20:55:41.384921    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:55:56 no-preload-473743 kubelet[1334]: E0130 20:55:56.385988    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:56:01 no-preload-473743 kubelet[1334]: E0130 20:56:01.510043    1334 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:56:01 no-preload-473743 kubelet[1334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:56:01 no-preload-473743 kubelet[1334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:56:01 no-preload-473743 kubelet[1334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:56:08 no-preload-473743 kubelet[1334]: E0130 20:56:08.406560    1334 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 20:56:08 no-preload-473743 kubelet[1334]: E0130 20:56:08.406607    1334 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 20:56:08 no-preload-473743 kubelet[1334]: E0130 20:56:08.406869    1334 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-b8492,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-wzb2g_kube-system(cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 20:56:08 no-preload-473743 kubelet[1334]: E0130 20:56:08.406922    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:56:22 no-preload-473743 kubelet[1334]: E0130 20:56:22.392256    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:56:33 no-preload-473743 kubelet[1334]: E0130 20:56:33.386618    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:56:48 no-preload-473743 kubelet[1334]: E0130 20:56:48.385560    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:56:59 no-preload-473743 kubelet[1334]: E0130 20:56:59.386078    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:57:01 no-preload-473743 kubelet[1334]: E0130 20:57:01.508327    1334 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:57:01 no-preload-473743 kubelet[1334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:57:01 no-preload-473743 kubelet[1334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:57:01 no-preload-473743 kubelet[1334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:57:10 no-preload-473743 kubelet[1334]: E0130 20:57:10.384850    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:57:22 no-preload-473743 kubelet[1334]: E0130 20:57:22.385370    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:57:33 no-preload-473743 kubelet[1334]: E0130 20:57:33.385487    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	Jan 30 20:57:46 no-preload-473743 kubelet[1334]: E0130 20:57:46.385106    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wzb2g" podUID="cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3"
	
	
	==> storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] <==
	I0130 20:40:09.484963       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0130 20:40:39.500627       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] <==
	I0130 20:40:39.698207       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 20:40:39.707654       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 20:40:39.707737       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 20:40:57.112638       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 20:40:57.112859       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-473743_f3fb3cca-9c04-49f9-ad5d-0674c5b889ec!
	I0130 20:40:57.112948       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"726cd493-9a17-4202-977a-c6967814510c", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-473743_f3fb3cca-9c04-49f9-ad5d-0674c5b889ec became leader
	I0130 20:40:57.213930       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-473743_f3fb3cca-9c04-49f9-ad5d-0674c5b889ec!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-473743 -n no-preload-473743
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-473743 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wzb2g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-473743 describe pod metrics-server-57f55c9bc5-wzb2g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-473743 describe pod metrics-server-57f55c9bc5-wzb2g: exit status 1 (77.315951ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wzb2g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-473743 describe pod metrics-server-57f55c9bc5-wzb2g: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (259.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (138.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0130 20:56:31.181678   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-150971 -n old-k8s-version-150971
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-30 20:57:42.539222192 +0000 UTC m=+5699.616191965
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-150971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-150971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.512µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-150971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-150971 -n old-k8s-version-150971
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-150971 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-150971 logs -n 25: (1.610010268s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:28 UTC | 30 Jan 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:28 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-922110                                        | pause-922110                 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-757744 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | disable-driver-mounts-757744                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:31 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473743             | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC | 30 Jan 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208583            | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC | 30 Jan 24 20:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-877742  | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC | 30 Jan 24 20:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:32 UTC |                     |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473743                  | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208583                 | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-150971        | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-877742       | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC | 30 Jan 24 20:48 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-150971             | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC | 30 Jan 24 20:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 20:36:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 20:36:09.643751   45819 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:36:09.644027   45819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:36:09.644038   45819 out.go:309] Setting ErrFile to fd 2...
	I0130 20:36:09.644045   45819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:36:09.644230   45819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:36:09.644766   45819 out.go:303] Setting JSON to false
	I0130 20:36:09.645668   45819 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4715,"bootTime":1706642255,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 20:36:09.645727   45819 start.go:138] virtualization: kvm guest
	I0130 20:36:09.648102   45819 out.go:177] * [old-k8s-version-150971] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 20:36:09.649772   45819 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 20:36:09.651000   45819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 20:36:09.649826   45819 notify.go:220] Checking for updates...
	I0130 20:36:09.653462   45819 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:36:09.654761   45819 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 20:36:09.655939   45819 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 20:36:09.657140   45819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 20:36:09.658638   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:36:09.659027   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:36:09.659066   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:36:09.672985   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39323
	I0130 20:36:09.673381   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:36:09.673876   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:36:09.673897   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:36:09.674191   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:36:09.674351   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:36:09.676038   45819 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0130 20:36:09.677315   45819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 20:36:09.677582   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:36:09.677630   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:36:09.691259   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0130 20:36:09.691604   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:36:09.692060   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:36:09.692089   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:36:09.692371   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:36:09.692555   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:36:09.726172   45819 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 20:36:09.727421   45819 start.go:298] selected driver: kvm2
	I0130 20:36:09.727433   45819 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:36:09.727546   45819 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 20:36:09.728186   45819 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:36:09.728255   45819 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 20:36:09.742395   45819 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 20:36:09.742715   45819 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 20:36:09.742771   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:36:09.742784   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:36:09.742794   45819 start_flags.go:321] config:
	{Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:36:09.742977   45819 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:36:09.745577   45819 out.go:177] * Starting control plane node old-k8s-version-150971 in cluster old-k8s-version-150971
	I0130 20:36:10.483495   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:09.746820   45819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 20:36:09.746852   45819 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0130 20:36:09.746865   45819 cache.go:56] Caching tarball of preloaded images
	I0130 20:36:09.746951   45819 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 20:36:09.746960   45819 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0130 20:36:09.747061   45819 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/config.json ...
	I0130 20:36:09.747229   45819 start.go:365] acquiring machines lock for old-k8s-version-150971: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:36:13.555547   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:19.635533   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:22.707498   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:28.787473   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:31.859544   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:37.939524   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:41.011456   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:47.091510   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:50.163505   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:56.243497   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:36:59.315474   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:05.395536   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:08.467514   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:14.547517   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:17.619561   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:23.699509   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:26.771568   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:32.851483   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:35.923502   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:42.003515   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:45.075526   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:51.155512   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:37:54.227514   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:38:00.307532   44923 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.220:22: connect: no route to host
	I0130 20:38:03.311451   45037 start.go:369] acquired machines lock for "embed-certs-208583" in 4m29.471089592s
	I0130 20:38:03.311507   45037 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:03.311514   45037 fix.go:54] fixHost starting: 
	I0130 20:38:03.311893   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:03.311933   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:03.326477   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0130 20:38:03.326949   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:03.327373   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:03.327403   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:03.327758   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:03.327946   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:03.328115   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:03.329604   45037 fix.go:102] recreateIfNeeded on embed-certs-208583: state=Stopped err=<nil>
	I0130 20:38:03.329646   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	W0130 20:38:03.329810   45037 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:03.331493   45037 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208583" ...
	I0130 20:38:03.332735   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Start
	I0130 20:38:03.332862   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring networks are active...
	I0130 20:38:03.333514   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring network default is active
	I0130 20:38:03.333859   45037 main.go:141] libmachine: (embed-certs-208583) Ensuring network mk-embed-certs-208583 is active
	I0130 20:38:03.334154   45037 main.go:141] libmachine: (embed-certs-208583) Getting domain xml...
	I0130 20:38:03.334860   45037 main.go:141] libmachine: (embed-certs-208583) Creating domain...
	I0130 20:38:03.309254   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:03.309293   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:38:03.311318   44923 machine.go:91] provisioned docker machine in 4m37.382925036s
	I0130 20:38:03.311359   44923 fix.go:56] fixHost completed within 4m37.403399512s
	I0130 20:38:03.311364   44923 start.go:83] releasing machines lock for "no-preload-473743", held for 4m37.403435936s
	W0130 20:38:03.311387   44923 start.go:694] error starting host: provision: host is not running
	W0130 20:38:03.311504   44923 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0130 20:38:03.311518   44923 start.go:709] Will try again in 5 seconds ...
	I0130 20:38:04.507963   45037 main.go:141] libmachine: (embed-certs-208583) Waiting to get IP...
	I0130 20:38:04.508755   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.509133   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.509207   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.509115   46132 retry.go:31] will retry after 189.527185ms: waiting for machine to come up
	I0130 20:38:04.700560   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.701193   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.701223   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.701137   46132 retry.go:31] will retry after 239.29825ms: waiting for machine to come up
	I0130 20:38:04.941612   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:04.942080   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:04.942116   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:04.942040   46132 retry.go:31] will retry after 388.672579ms: waiting for machine to come up
	I0130 20:38:05.332617   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:05.333018   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:05.333041   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:05.332968   46132 retry.go:31] will retry after 525.5543ms: waiting for machine to come up
	I0130 20:38:05.859677   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:05.860094   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:05.860126   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:05.860055   46132 retry.go:31] will retry after 595.87535ms: waiting for machine to come up
	I0130 20:38:06.457828   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:06.458220   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:06.458244   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:06.458197   46132 retry.go:31] will retry after 766.148522ms: waiting for machine to come up
	I0130 20:38:07.226151   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:07.226615   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:07.226652   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:07.226558   46132 retry.go:31] will retry after 843.449223ms: waiting for machine to come up
	I0130 20:38:08.070983   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:08.071381   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:08.071407   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:08.071338   46132 retry.go:31] will retry after 1.079839146s: waiting for machine to come up
	I0130 20:38:08.313897   44923 start.go:365] acquiring machines lock for no-preload-473743: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:38:09.152768   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:09.153087   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:09.153113   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:09.153034   46132 retry.go:31] will retry after 1.855245571s: waiting for machine to come up
	I0130 20:38:11.010893   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:11.011260   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:11.011299   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:11.011196   46132 retry.go:31] will retry after 2.159062372s: waiting for machine to come up
	I0130 20:38:13.172734   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:13.173144   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:13.173173   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:13.173106   46132 retry.go:31] will retry after 2.73165804s: waiting for machine to come up
	I0130 20:38:15.908382   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:15.908803   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:15.908834   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:15.908732   46132 retry.go:31] will retry after 3.268718285s: waiting for machine to come up
	I0130 20:38:23.603972   45441 start.go:369] acquired machines lock for "default-k8s-diff-port-877742" in 3m48.064811183s
	I0130 20:38:23.604051   45441 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:23.604061   45441 fix.go:54] fixHost starting: 
	I0130 20:38:23.604420   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:23.604456   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:23.620189   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0130 20:38:23.620538   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:23.621035   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:38:23.621073   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:23.621415   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:23.621584   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:23.621739   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:38:23.623158   45441 fix.go:102] recreateIfNeeded on default-k8s-diff-port-877742: state=Stopped err=<nil>
	I0130 20:38:23.623185   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	W0130 20:38:23.623382   45441 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:23.625974   45441 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-877742" ...
	I0130 20:38:19.178930   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:19.179358   45037 main.go:141] libmachine: (embed-certs-208583) DBG | unable to find current IP address of domain embed-certs-208583 in network mk-embed-certs-208583
	I0130 20:38:19.179389   45037 main.go:141] libmachine: (embed-certs-208583) DBG | I0130 20:38:19.179300   46132 retry.go:31] will retry after 3.117969425s: waiting for machine to come up
	I0130 20:38:22.300539   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.300957   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has current primary IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.300982   45037 main.go:141] libmachine: (embed-certs-208583) Found IP for machine: 192.168.61.63
	I0130 20:38:22.300997   45037 main.go:141] libmachine: (embed-certs-208583) Reserving static IP address...
	I0130 20:38:22.301371   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "embed-certs-208583", mac: "52:54:00:43:f2:e1", ip: "192.168.61.63"} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.301395   45037 main.go:141] libmachine: (embed-certs-208583) Reserved static IP address: 192.168.61.63
	I0130 20:38:22.301409   45037 main.go:141] libmachine: (embed-certs-208583) DBG | skip adding static IP to network mk-embed-certs-208583 - found existing host DHCP lease matching {name: "embed-certs-208583", mac: "52:54:00:43:f2:e1", ip: "192.168.61.63"}
	I0130 20:38:22.301420   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Getting to WaitForSSH function...
	I0130 20:38:22.301436   45037 main.go:141] libmachine: (embed-certs-208583) Waiting for SSH to be available...
	I0130 20:38:22.303472   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.303820   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.303842   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.303968   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Using SSH client type: external
	I0130 20:38:22.304011   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa (-rw-------)
	I0130 20:38:22.304042   45037 main.go:141] libmachine: (embed-certs-208583) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:38:22.304052   45037 main.go:141] libmachine: (embed-certs-208583) DBG | About to run SSH command:
	I0130 20:38:22.304065   45037 main.go:141] libmachine: (embed-certs-208583) DBG | exit 0
	I0130 20:38:22.398610   45037 main.go:141] libmachine: (embed-certs-208583) DBG | SSH cmd err, output: <nil>: 
	I0130 20:38:22.398945   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetConfigRaw
	I0130 20:38:22.399605   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:22.402157   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.402531   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.402569   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.402759   45037 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/config.json ...
	I0130 20:38:22.402974   45037 machine.go:88] provisioning docker machine ...
	I0130 20:38:22.402999   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:22.403238   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.403440   45037 buildroot.go:166] provisioning hostname "embed-certs-208583"
	I0130 20:38:22.403462   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.403642   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.405694   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.406026   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.406055   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.406180   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.406429   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.406599   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.406734   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.406904   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:22.407422   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:22.407446   45037 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208583 && echo "embed-certs-208583" | sudo tee /etc/hostname
	I0130 20:38:22.548206   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208583
	
	I0130 20:38:22.548240   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.550933   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.551316   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.551345   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.551492   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.551690   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.551821   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.551934   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.552129   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:22.552425   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:22.552441   45037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:38:22.687464   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:22.687491   45037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:38:22.687536   45037 buildroot.go:174] setting up certificates
	I0130 20:38:22.687551   45037 provision.go:83] configureAuth start
	I0130 20:38:22.687562   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetMachineName
	I0130 20:38:22.687813   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:22.690307   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.690664   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.690686   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.690855   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.693139   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.693426   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.693462   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.693597   45037 provision.go:138] copyHostCerts
	I0130 20:38:22.693667   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:38:22.693686   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:38:22.693766   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:38:22.693866   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:38:22.693876   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:38:22.693912   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:38:22.693986   45037 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:38:22.693997   45037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:38:22.694036   45037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:38:22.694122   45037 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208583 san=[192.168.61.63 192.168.61.63 localhost 127.0.0.1 minikube embed-certs-208583]
	I0130 20:38:22.862847   45037 provision.go:172] copyRemoteCerts
	I0130 20:38:22.862902   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:38:22.862921   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:22.865533   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.865812   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:22.865842   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:22.866006   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:22.866200   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:22.866315   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:22.866496   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:22.959746   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:38:22.982164   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 20:38:23.004087   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:38:23.025875   45037 provision.go:86] duration metric: configureAuth took 338.306374ms
	I0130 20:38:23.025896   45037 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:38:23.026090   45037 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:23.026173   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.028688   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.028913   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.028946   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.029125   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.029277   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.029430   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.029550   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.029679   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:23.029980   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:23.029995   45037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:38:23.337986   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:38:23.338008   45037 machine.go:91] provisioned docker machine in 935.018208ms
	I0130 20:38:23.338016   45037 start.go:300] post-start starting for "embed-certs-208583" (driver="kvm2")
	I0130 20:38:23.338026   45037 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:38:23.338051   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.338301   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:38:23.338327   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.341005   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.341398   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.341429   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.341516   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.341686   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.341825   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.341997   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.437500   45037 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:38:23.441705   45037 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:38:23.441724   45037 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:38:23.441784   45037 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:38:23.441851   45037 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:38:23.441937   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:38:23.450700   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:23.471898   45037 start.go:303] post-start completed in 133.870929ms
	I0130 20:38:23.471916   45037 fix.go:56] fixHost completed within 20.160401625s
	I0130 20:38:23.471940   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.474341   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.474659   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.474695   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.474793   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.474984   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.475181   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.475341   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.475515   45037 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:23.475878   45037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0130 20:38:23.475891   45037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:38:23.603819   45037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647103.552984334
	
	I0130 20:38:23.603841   45037 fix.go:206] guest clock: 1706647103.552984334
	I0130 20:38:23.603848   45037 fix.go:219] Guest: 2024-01-30 20:38:23.552984334 +0000 UTC Remote: 2024-01-30 20:38:23.471920461 +0000 UTC m=+289.780929635 (delta=81.063873ms)
	I0130 20:38:23.603879   45037 fix.go:190] guest clock delta is within tolerance: 81.063873ms
	I0130 20:38:23.603885   45037 start.go:83] releasing machines lock for "embed-certs-208583", held for 20.292396099s
	I0130 20:38:23.603916   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.604168   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:23.606681   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.607027   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.607060   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.607190   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607703   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607876   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:23.607947   45037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:38:23.607999   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.608115   45037 ssh_runner.go:195] Run: cat /version.json
	I0130 20:38:23.608140   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:23.610693   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611052   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.611078   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611154   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611199   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.611380   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.611530   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.611585   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:23.611625   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:23.611666   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.611790   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:23.611935   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:23.612081   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:23.612197   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:23.725868   45037 ssh_runner.go:195] Run: systemctl --version
	I0130 20:38:23.731516   45037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:38:23.872093   45037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:38:23.878418   45037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:38:23.878493   45037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:38:23.892910   45037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:38:23.892934   45037 start.go:475] detecting cgroup driver to use...
	I0130 20:38:23.893007   45037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:38:23.905950   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:38:23.917437   45037 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:38:23.917484   45037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:38:23.929241   45037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:38:23.940979   45037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:38:24.045106   45037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:38:24.160413   45037 docker.go:233] disabling docker service ...
	I0130 20:38:24.160486   45037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:38:24.173684   45037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:38:24.185484   45037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:38:24.308292   45037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:38:24.430021   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:38:24.442910   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:38:24.460145   45037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:38:24.460211   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.469163   45037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:38:24.469225   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.478396   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.487374   45037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:24.496306   45037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:38:24.505283   45037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:38:24.512919   45037 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:38:24.512974   45037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:38:24.523939   45037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:38:24.533002   45037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:38:24.665917   45037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:38:24.839797   45037 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:38:24.839866   45037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:38:24.851397   45037 start.go:543] Will wait 60s for crictl version
	I0130 20:38:24.851454   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:38:24.855227   45037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:38:24.888083   45037 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:38:24.888163   45037 ssh_runner.go:195] Run: crio --version
	I0130 20:38:24.934626   45037 ssh_runner.go:195] Run: crio --version
	I0130 20:38:24.984233   45037 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 20:38:23.627365   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Start
	I0130 20:38:23.627532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring networks are active...
	I0130 20:38:23.628247   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring network default is active
	I0130 20:38:23.628650   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Ensuring network mk-default-k8s-diff-port-877742 is active
	I0130 20:38:23.629109   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Getting domain xml...
	I0130 20:38:23.629715   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Creating domain...
	I0130 20:38:24.849156   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting to get IP...
	I0130 20:38:24.850261   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:24.850701   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:24.850729   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:24.850645   46249 retry.go:31] will retry after 259.328149ms: waiting for machine to come up
	I0130 20:38:25.112451   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.112941   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.112971   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.112905   46249 retry.go:31] will retry after 283.994822ms: waiting for machine to come up
	I0130 20:38:25.398452   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.398937   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.398968   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.398904   46249 retry.go:31] will retry after 348.958329ms: waiting for machine to come up
	I0130 20:38:24.985681   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetIP
	I0130 20:38:24.988666   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:24.989016   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:24.989042   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:24.989288   45037 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0130 20:38:24.993626   45037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
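(Editor's note) The one-liner above keeps /etc/hosts idempotent: filter out any existing host.minikube.internal line, append the fresh mapping, and copy the result back over the file. A sketch of the same upsert in Go follows; the function name and the sample path are placeholders, not minikube's implementation.

// hosts_upsert_sketch.go — illustrative re-implementation of the grep -v / append idiom above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any line already ending in the hostname and appends a
// fresh "ip<TAB>hostname" entry, so repeated runs leave exactly one mapping.
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry; re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Operates on a scratch copy rather than the real /etc/hosts.
	if err := upsertHostsEntry("hosts.sample", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}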
	I0130 20:38:25.005749   45037 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:38:25.005817   45037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:25.047605   45037 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:38:25.047674   45037 ssh_runner.go:195] Run: which lz4
	I0130 20:38:25.051662   45037 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0130 20:38:25.055817   45037 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:38:25.055849   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:38:26.895244   45037 crio.go:444] Took 1.843605 seconds to copy over tarball
	I0130 20:38:26.895332   45037 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:38:25.749560   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.750020   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:25.750048   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:25.749985   46249 retry.go:31] will retry after 597.656366ms: waiting for machine to come up
	I0130 20:38:26.349518   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.349957   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.350004   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:26.349929   46249 retry.go:31] will retry after 600.926171ms: waiting for machine to come up
	I0130 20:38:26.952713   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.953319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:26.953343   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:26.953276   46249 retry.go:31] will retry after 654.976543ms: waiting for machine to come up
	I0130 20:38:27.610017   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:27.610464   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:27.610494   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:27.610413   46249 retry.go:31] will retry after 881.075627ms: waiting for machine to come up
	I0130 20:38:28.493641   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:28.494188   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:28.494218   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:28.494136   46249 retry.go:31] will retry after 1.436302447s: waiting for machine to come up
	I0130 20:38:29.932271   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:29.932794   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:29.932825   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:29.932729   46249 retry.go:31] will retry after 1.394659615s: waiting for machine to come up
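(Editor's note) The interleaved 45441 lines show libmachine polling libvirt for the VM's DHCP lease, sleeping a growing, jittered interval between attempts (259ms, 283ms, and so on up to a few seconds). A rough Go sketch of that retry shape is below; the helper names are assumptions and only mirror the behaviour visible in the log, not minikube's retry package.

// retry_backoff_sketch.go — poll a condition with a growing, jittered delay until a deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls cond until it returns true or the overall timeout elapses.
func waitFor(cond func() bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if cond() {
			return nil
		}
		// Jitter the delay and let it grow, roughly like the intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay += delay / 3
	}
	return errors.New("timed out waiting for machine to come up")
}

func main() {
	start := time.Now()
	err := waitFor(func() bool { return time.Since(start) > 2*time.Second }, 10*time.Second)
	fmt.Println("result:", err)
}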
	I0130 20:38:29.834721   45037 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.939351369s)
	I0130 20:38:29.834746   45037 crio.go:451] Took 2.939470 seconds to extract the tarball
	I0130 20:38:29.834754   45037 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:38:29.875618   45037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:29.921569   45037 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:38:29.921593   45037 cache_images.go:84] Images are preloaded, skipping loading
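(Editor's note) Steps above: crictl reports the expected images are missing, so minikube copies the ~458MB preloaded tarball over SSH, extracts it under /var with lz4 while preserving xattrs, removes the tarball, and re-checks crictl. A hedged sketch of the extraction step follows; the path and the helper name are placeholders for illustration.

// preload_extract_sketch.go — unpack an lz4-compressed image tarball the way the logged tar call does.
package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks tarball into /var, preserving security.capability xattrs.
func extractPreload(tarball string) error {
	out, err := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}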
	I0130 20:38:29.921661   45037 ssh_runner.go:195] Run: crio config
	I0130 20:38:29.981565   45037 cni.go:84] Creating CNI manager for ""
	I0130 20:38:29.981590   45037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:29.981612   45037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:38:29.981637   45037 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.63 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-208583 NodeName:embed-certs-208583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:38:29.981824   45037 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-208583"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:38:29.981919   45037 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-208583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-208583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:38:29.981984   45037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:38:29.991601   45037 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:38:29.991665   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:38:30.000815   45037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0130 20:38:30.016616   45037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:38:30.032999   45037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0130 20:38:30.052735   45037 ssh_runner.go:195] Run: grep 192.168.61.63	control-plane.minikube.internal$ /etc/hosts
	I0130 20:38:30.057008   45037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:30.069968   45037 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583 for IP: 192.168.61.63
	I0130 20:38:30.070004   45037 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:30.070164   45037 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:38:30.070201   45037 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:38:30.070263   45037 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/client.key
	I0130 20:38:30.070323   45037 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.key.9879da99
	I0130 20:38:30.070370   45037 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.key
	I0130 20:38:30.070496   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:38:30.070531   45037 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:38:30.070541   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:38:30.070561   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:38:30.070586   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:38:30.070612   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:38:30.070659   45037 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:30.071211   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:38:30.098665   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:38:30.125013   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:38:30.150013   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/embed-certs-208583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:38:30.177206   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:38:30.202683   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:38:30.225774   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:38:30.249090   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:38:30.274681   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:38:30.302316   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:38:30.326602   45037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:38:30.351136   45037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:38:30.368709   45037 ssh_runner.go:195] Run: openssl version
	I0130 20:38:30.374606   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:38:30.386421   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.391240   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.391314   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:38:30.397082   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:38:30.409040   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:38:30.420910   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.425929   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.425971   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:30.431609   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:38:30.443527   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:38:30.455200   45037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.460242   45037 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.460307   45037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:38:30.466225   45037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
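(Editor's note) The test -L / ln -fs pairs above wire each PEM into the OpenSSL trust store: the link name is the certificate's subject hash (from openssl x509 -hash) plus a .0 suffix. A small illustrative Go sketch of that wiring follows; unlike the log's variant it recreates the link unconditionally rather than only when missing, and the paths are examples.

// ca_symlink_sketch.go — link a PEM into /etc/ssl/certs under its OpenSSL subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkIntoTrustStore(pemPath string) error {
	// Ask openssl for the subject hash rather than re-implementing its canonical hashing.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Recreate the link unconditionally; the logged variant only creates it when missing.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}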
	I0130 20:38:30.479406   45037 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:38:30.485331   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:38:30.493468   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:38:30.499465   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:38:30.505394   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:38:30.511152   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:38:30.516951   45037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
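(Editor's note) Each openssl -checkend 86400 call above asks whether a certificate expires within the next 24 hours. The same check expressed with Go's crypto/x509, as a standalone sketch with a placeholder path:

// cert_expiry_sketch.go — report whether a PEM certificate expires within a window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate in pemPath expires before now+window.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}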
	I0130 20:38:30.522596   45037 kubeadm.go:404] StartCluster: {Name:embed-certs-208583 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-208583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:38:30.522698   45037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:38:30.522747   45037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:30.559669   45037 cri.go:89] found id: ""
	I0130 20:38:30.559740   45037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:38:30.571465   45037 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:38:30.571487   45037 kubeadm.go:636] restartCluster start
	I0130 20:38:30.571539   45037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:38:30.581398   45037 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:30.582366   45037 kubeconfig.go:92] found "embed-certs-208583" server: "https://192.168.61.63:8443"
	I0130 20:38:30.584719   45037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:38:30.593986   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:30.594031   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:30.606926   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.094476   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:31.094545   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:31.106991   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.594553   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:31.594633   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:31.607554   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:32.094029   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:32.094114   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:32.107447   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:32.594998   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:32.595079   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:32.607929   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:33.094468   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:33.094562   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:33.111525   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:33.594502   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:33.594578   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:33.611216   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:31.329366   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:31.329720   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:31.329739   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:31.329672   46249 retry.go:31] will retry after 1.8606556s: waiting for machine to come up
	I0130 20:38:33.192538   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:33.192916   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:33.192938   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:33.192873   46249 retry.go:31] will retry after 2.294307307s: waiting for machine to come up
	I0130 20:38:34.094151   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:34.094223   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:34.106531   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:34.594098   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:34.594172   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:34.606286   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.094891   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:35.094995   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:35.106949   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.594452   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:35.594532   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:35.611066   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:36.094606   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:36.094684   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:36.110348   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:36.595021   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:36.595084   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:36.609884   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:37.094347   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:37.094445   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:37.106709   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:37.594248   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:37.594348   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:37.610367   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:38.095063   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:38.095141   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:38.107195   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:38.594024   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:38.594139   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:38.606041   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:35.489701   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:35.490129   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:35.490166   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:35.490071   46249 retry.go:31] will retry after 2.434575636s: waiting for machine to come up
	I0130 20:38:37.927709   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:37.928168   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:37.928198   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:37.928111   46249 retry.go:31] will retry after 3.073200884s: waiting for machine to come up
	I0130 20:38:39.094490   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:39.094572   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:39.106154   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:39.594866   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:39.594961   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:39.606937   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.094464   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:40.094549   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:40.106068   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.594556   45037 api_server.go:166] Checking apiserver status ...
	I0130 20:38:40.594637   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:40.606499   45037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:40.606523   45037 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
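(Editor's note) The block above is a polling loop: roughly every 500ms minikube pgreps for a kube-apiserver process, and when the context deadline passes without finding one it concludes the cluster needs reconfiguring. A minimal Go sketch of that loop shape follows; the helper names are assumptions, the pgrep pattern is taken from the log.

// apiserver_poll_sketch.go — poll for kube-apiserver until a deadline, then give up.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process for this cluster exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			fmt.Println("needs reconfigure: apiserver error:", ctx.Err())
			return
		case <-ticker.C:
			if apiserverRunning() {
				fmt.Println("apiserver is up")
				return
			}
		}
	}
}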
	I0130 20:38:40.606544   45037 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:38:40.606554   45037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:38:40.606605   45037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:40.646444   45037 cri.go:89] found id: ""
	I0130 20:38:40.646505   45037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:38:40.661886   45037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:38:40.670948   45037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:38:40.671008   45037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:38:40.679749   45037 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:38:40.679771   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:40.780597   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:41.804175   45037 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.023537725s)
	I0130 20:38:41.804214   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:41.999624   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:42.103064   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:42.173522   45037 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:38:42.173628   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:42.674417   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:43.173996   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:43.674137   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:41.004686   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:41.005140   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | unable to find current IP address of domain default-k8s-diff-port-877742 in network mk-default-k8s-diff-port-877742
	I0130 20:38:41.005165   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | I0130 20:38:41.005085   46249 retry.go:31] will retry after 3.766414086s: waiting for machine to come up
	I0130 20:38:44.773568   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.774049   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Found IP for machine: 192.168.72.52
	I0130 20:38:44.774082   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has current primary IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.774099   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Reserving static IP address...
	I0130 20:38:44.774494   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-877742", mac: "52:54:00:c4:e0:0b", ip: "192.168.72.52"} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.774517   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Reserved static IP address: 192.168.72.52
	I0130 20:38:44.774543   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | skip adding static IP to network mk-default-k8s-diff-port-877742 - found existing host DHCP lease matching {name: "default-k8s-diff-port-877742", mac: "52:54:00:c4:e0:0b", ip: "192.168.72.52"}
	I0130 20:38:44.774561   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Waiting for SSH to be available...
	I0130 20:38:44.774589   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Getting to WaitForSSH function...
	I0130 20:38:44.776761   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.777079   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.777114   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.777210   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Using SSH client type: external
	I0130 20:38:44.777242   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa (-rw-------)
	I0130 20:38:44.777299   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:38:44.777332   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | About to run SSH command:
	I0130 20:38:44.777352   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | exit 0
	I0130 20:38:44.875219   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | SSH cmd err, output: <nil>: 
	I0130 20:38:44.875515   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetConfigRaw
	I0130 20:38:44.876243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:44.878633   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.879035   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.879069   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.879336   45441 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/config.json ...
	I0130 20:38:44.879504   45441 machine.go:88] provisioning docker machine ...
	I0130 20:38:44.879522   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:44.879734   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:44.879889   45441 buildroot.go:166] provisioning hostname "default-k8s-diff-port-877742"
	I0130 20:38:44.879932   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:44.880102   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:44.882426   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.882753   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:44.882777   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:44.882927   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:44.883099   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:44.883246   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:44.883409   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:44.883569   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:44.884066   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:44.884092   45441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-877742 && echo "default-k8s-diff-port-877742" | sudo tee /etc/hostname
	I0130 20:38:45.030801   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-877742
	
	I0130 20:38:45.030847   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.033532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.033897   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.033955   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.034094   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.034309   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.034489   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.034644   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.034826   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:45.035168   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:45.035187   45441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-877742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-877742/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-877742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:38:45.175807   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:38:45.175849   45441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:38:45.175884   45441 buildroot.go:174] setting up certificates
	I0130 20:38:45.175907   45441 provision.go:83] configureAuth start
	I0130 20:38:45.175923   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetMachineName
	I0130 20:38:45.176200   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:45.179102   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.179489   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.179526   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.179664   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.182178   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.182532   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.182560   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.182666   45441 provision.go:138] copyHostCerts
	I0130 20:38:45.182716   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:38:45.182728   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:38:45.182788   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:38:45.182895   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:38:45.182910   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:38:45.182973   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:38:45.183054   45441 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:38:45.183065   45441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:38:45.183090   45441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:38:45.183158   45441 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-877742 san=[192.168.72.52 192.168.72.52 localhost 127.0.0.1 minikube default-k8s-diff-port-877742]
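(Editor's note) The provisioning step above generates a server certificate signed by the shared CA, with the machine's IP, hostname, localhost and "minikube" listed as SANs. A compact, self-contained crypto/x509 sketch of that idea follows; it creates a throwaway CA instead of loading ca.pem/ca-key.pem, and key sizes, serials and error handling are simplified placeholders, not minikube's implementation.

// server_cert_sketch.go — sign a server certificate with SANs using a demo CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the persisted ca.pem / ca-key.pem pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "demoCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate whose SAN list mirrors the "san=[...]" line in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-877742"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-877742"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.72.52"), net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("server cert bytes:", len(srvDER), "err:", err)
}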
	I0130 20:38:45.352895   45441 provision.go:172] copyRemoteCerts
	I0130 20:38:45.352960   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:38:45.352986   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.355820   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.356141   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.356169   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.356343   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.356540   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.356717   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.356868   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.136084   45819 start.go:369] acquired machines lock for "old-k8s-version-150971" in 2m36.388823473s
	I0130 20:38:46.136157   45819 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:38:46.136169   45819 fix.go:54] fixHost starting: 
	I0130 20:38:46.136624   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:46.136669   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:46.153210   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33685
	I0130 20:38:46.153604   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:46.154080   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:38:46.154104   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:46.154422   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:46.154630   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:38:46.154771   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:38:46.156388   45819 fix.go:102] recreateIfNeeded on old-k8s-version-150971: state=Stopped err=<nil>
	I0130 20:38:46.156420   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	W0130 20:38:46.156613   45819 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:38:46.158388   45819 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-150971" ...
	I0130 20:38:45.456511   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:38:45.483324   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0130 20:38:45.510567   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:38:45.535387   45441 provision.go:86] duration metric: configureAuth took 359.467243ms
	I0130 20:38:45.535421   45441 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:38:45.535659   45441 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:45.535749   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.538712   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.539176   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.539214   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.539334   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.539574   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.539741   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.539995   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.540244   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:45.540770   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:45.540796   45441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:38:45.877778   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:38:45.877813   45441 machine.go:91] provisioned docker machine in 998.294632ms
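The `%!s(MISSING)` in the logged command is an artifact of Go's fmt verb expansion at log time; judging from the echoed output two lines above, the command actually run over SSH most likely resembles the sketch below (the quoting is an assumption, the flag value is taken verbatim from the log):

    # Hedged reconstruction of the provisioning step logged above.
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio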
	I0130 20:38:45.877825   45441 start.go:300] post-start starting for "default-k8s-diff-port-877742" (driver="kvm2")
	I0130 20:38:45.877845   45441 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:38:45.877869   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:45.878190   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:38:45.878224   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:45.881167   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.881533   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:45.881566   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:45.881704   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:45.881880   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:45.882064   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:45.882207   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:45.972932   45441 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:38:45.977412   45441 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:38:45.977437   45441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:38:45.977514   45441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:38:45.977593   45441 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:38:45.977694   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:38:45.985843   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:46.008484   45441 start.go:303] post-start completed in 130.643321ms
	I0130 20:38:46.008509   45441 fix.go:56] fixHost completed within 22.404447995s
	I0130 20:38:46.008533   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.011463   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.011901   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.011944   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.012088   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.012304   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.012500   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.012647   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.012803   45441 main.go:141] libmachine: Using SSH client type: native
	I0130 20:38:46.013202   45441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0130 20:38:46.013226   45441 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:38:46.135930   45441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647126.077813825
	
	I0130 20:38:46.135955   45441 fix.go:206] guest clock: 1706647126.077813825
	I0130 20:38:46.135965   45441 fix.go:219] Guest: 2024-01-30 20:38:46.077813825 +0000 UTC Remote: 2024-01-30 20:38:46.008513384 +0000 UTC m=+250.621109629 (delta=69.300441ms)
	I0130 20:38:46.135988   45441 fix.go:190] guest clock delta is within tolerance: 69.300441ms
	I0130 20:38:46.135993   45441 start.go:83] releasing machines lock for "default-k8s-diff-port-877742", held for 22.53196506s
	I0130 20:38:46.136021   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.136315   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:46.139211   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.139549   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.139581   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.139695   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140427   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:38:46.140507   45441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:38:46.140555   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.140639   45441 ssh_runner.go:195] Run: cat /version.json
	I0130 20:38:46.140661   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:38:46.143348   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143614   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143651   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.143675   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.143843   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.144027   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.144081   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:46.144110   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:46.144228   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:38:46.144253   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.144434   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:38:46.144434   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.144580   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:38:46.144707   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:38:46.241499   45441 ssh_runner.go:195] Run: systemctl --version
	I0130 20:38:46.264180   45441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:38:46.417654   45441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:38:46.423377   45441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:38:46.423450   45441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:38:46.439524   45441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:38:46.439549   45441 start.go:475] detecting cgroup driver to use...
	I0130 20:38:46.439612   45441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:38:46.456668   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:38:46.469494   45441 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:38:46.469547   45441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:38:46.482422   45441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:38:46.496031   45441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:38:46.601598   45441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:38:46.710564   45441 docker.go:233] disabling docker service ...
	I0130 20:38:46.710633   45441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:38:46.724084   45441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:38:46.736019   45441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:38:46.853310   45441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:38:46.976197   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:38:46.991033   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:38:47.009961   45441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:38:47.010028   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.019749   45441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:38:47.019822   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.032215   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:38:47.043642   45441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
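After the sed edits above, the relevant keys in the CRI-O drop-in should read roughly as follows; this grep is an illustrative check, with the file path and key names taken from the log:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"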
	I0130 20:38:47.056005   45441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:38:47.068954   45441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:38:47.079752   45441 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:38:47.079823   45441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:38:47.096106   45441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
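The failed sysctl above only means the br_netfilter module was not yet loaded, which is why minikube falls back to modprobe and then enables IP forwarding. The sequence can be reproduced by hand as in this sketch (the read-back checks are an addition):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables        # should now resolve
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    cat /proc/sys/net/ipv4/ip_forward                # should print 1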
	I0130 20:38:47.109074   45441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:38:47.243783   45441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:38:47.468971   45441 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:38:47.469055   45441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:38:47.474571   45441 start.go:543] Will wait 60s for crictl version
	I0130 20:38:47.474646   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:38:47.479007   45441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:38:47.525155   45441 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:38:47.525259   45441 ssh_runner.go:195] Run: crio --version
	I0130 20:38:47.582308   45441 ssh_runner.go:195] Run: crio --version
	I0130 20:38:47.648689   45441 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 20:38:44.173930   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:38:44.197493   45037 api_server.go:72] duration metric: took 2.023971316s to wait for apiserver process to appear ...
	I0130 20:38:44.197522   45037 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:38:44.197545   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:44.198089   45037 api_server.go:269] stopped: https://192.168.61.63:8443/healthz: Get "https://192.168.61.63:8443/healthz": dial tcp 192.168.61.63:8443: connect: connection refused
	I0130 20:38:44.697622   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:48.683401   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.683435   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:48.683452   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
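A 403 from /healthz at this point only indicates that anonymous access is still restricted while the RBAC bootstrap roles are being created (the later 500 responses list poststarthook/rbac/bootstrap-roles as the failing check). The probe can be approximated from the host with curl; the endpoint below is the one in this log and -k is needed because the apiserver certificate is not trusted there:

    # Poll the apiserver health endpoint until it reports ok (illustrative loop).
    until curl -sk https://192.168.61.63:8443/healthz | grep -q '^ok$'; do
      sleep 0.5
    done
    echo "apiserver healthy"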
	I0130 20:38:46.159722   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Start
	I0130 20:38:46.159892   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring networks are active...
	I0130 20:38:46.160650   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring network default is active
	I0130 20:38:46.160960   45819 main.go:141] libmachine: (old-k8s-version-150971) Ensuring network mk-old-k8s-version-150971 is active
	I0130 20:38:46.161374   45819 main.go:141] libmachine: (old-k8s-version-150971) Getting domain xml...
	I0130 20:38:46.162142   45819 main.go:141] libmachine: (old-k8s-version-150971) Creating domain...
	I0130 20:38:47.490526   45819 main.go:141] libmachine: (old-k8s-version-150971) Waiting to get IP...
	I0130 20:38:47.491491   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:47.491971   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:47.492059   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:47.491949   46425 retry.go:31] will retry after 201.906522ms: waiting for machine to come up
	I0130 20:38:47.695709   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:47.696195   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:47.696226   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:47.696146   46425 retry.go:31] will retry after 347.547284ms: waiting for machine to come up
	I0130 20:38:48.045541   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.046078   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.046102   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.046013   46425 retry.go:31] will retry after 373.23424ms: waiting for machine to come up
	I0130 20:38:48.420618   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.421238   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.421263   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.421188   46425 retry.go:31] will retry after 515.166265ms: waiting for machine to come up
	I0130 20:38:48.937713   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:48.942554   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:48.942581   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:48.942448   46425 retry.go:31] will retry after 626.563548ms: waiting for machine to come up
	I0130 20:38:49.570078   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:49.570658   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:49.570689   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:49.570550   46425 retry.go:31] will retry after 618.022034ms: waiting for machine to come up
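The retries above come from libmachine polling libvirt's DHCP leases until the restarted domain reports an address. The same information can be inspected manually with virsh, assuming it is available on the host running the tests (domain and network names are taken from the log):

    virsh net-dhcp-leases mk-old-k8s-version-150971
    virsh domifaddr old-k8s-version-150971 --source lease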
	I0130 20:38:48.786797   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.786825   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:48.786848   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:48.837579   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:38:48.837608   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:38:49.198568   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:49.206091   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:38:49.206135   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:38:49.697669   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:49.707878   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:38:49.707912   45037 api_server.go:103] status: https://192.168.61.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:38:50.198039   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:38:50.209003   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 200:
	ok
	I0130 20:38:50.228887   45037 api_server.go:141] control plane version: v1.28.4
	I0130 20:38:50.228967   45037 api_server.go:131] duration metric: took 6.031436808s to wait for apiserver health ...
	I0130 20:38:50.228981   45037 cni.go:84] Creating CNI manager for ""
	I0130 20:38:50.228991   45037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:50.230543   45037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:38:47.649943   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetIP
	I0130 20:38:47.653185   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:47.653623   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:38:47.653664   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:38:47.653933   45441 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0130 20:38:47.659385   45441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:47.675851   45441 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:38:47.675918   45441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:47.724799   45441 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:38:47.724883   45441 ssh_runner.go:195] Run: which lz4
	I0130 20:38:47.729563   45441 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:38:47.735015   45441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:38:47.735048   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:38:49.612191   45441 crio.go:444] Took 1.882668 seconds to copy over tarball
	I0130 20:38:49.612263   45441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
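The preload path above avoids pulling each image individually: the ~458 MB tarball is copied into the VM and unpacked under /var so CRI-O starts with its image store already populated. A sketch of the extraction plus a follow-up check (the crictl verification is an addition):

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images | grep kube-apiserver    # expect registry.k8s.io/kube-apiserver tagged v1.28.4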
	I0130 20:38:50.231895   45037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:38:50.262363   45037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:38:50.290525   45037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:38:50.307654   45037 system_pods.go:59] 8 kube-system pods found
	I0130 20:38:50.307696   45037 system_pods.go:61] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:38:50.307708   45037 system_pods.go:61] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:38:50.307721   45037 system_pods.go:61] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:38:50.307736   45037 system_pods.go:61] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:38:50.307751   45037 system_pods.go:61] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:38:50.307760   45037 system_pods.go:61] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:38:50.307769   45037 system_pods.go:61] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:38:50.307788   45037 system_pods.go:61] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:38:50.307810   45037 system_pods.go:74] duration metric: took 17.261001ms to wait for pod list to return data ...
	I0130 20:38:50.307820   45037 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:38:50.317889   45037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:38:50.317926   45037 node_conditions.go:123] node cpu capacity is 2
	I0130 20:38:50.317939   45037 node_conditions.go:105] duration metric: took 10.11037ms to run NodePressure ...
	I0130 20:38:50.317960   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:38:50.681835   45037 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:38:50.688460   45037 kubeadm.go:787] kubelet initialised
	I0130 20:38:50.688488   45037 kubeadm.go:788] duration metric: took 6.61921ms waiting for restarted kubelet to initialise ...
	I0130 20:38:50.688498   45037 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:50.696051   45037 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.703680   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.703713   45037 pod_ready.go:81] duration metric: took 7.634057ms waiting for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.703724   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.703739   45037 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.710192   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "etcd-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.710216   45037 pod_ready.go:81] duration metric: took 6.467699ms waiting for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.710227   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "etcd-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.710235   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.720866   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.720894   45037 pod_ready.go:81] duration metric: took 10.648867ms waiting for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.720906   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.720914   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:50.731095   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.731162   45037 pod_ready.go:81] duration metric: took 10.237453ms waiting for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:50.731181   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:50.731190   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.097357   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-proxy-g7q5t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.097391   45037 pod_ready.go:81] duration metric: took 366.190232ms waiting for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.097404   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-proxy-g7q5t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.097413   45037 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.499223   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.499261   45037 pod_ready.go:81] duration metric: took 401.839475ms waiting for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.499293   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.499303   45037 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:51.895725   45037 pod_ready.go:97] node "embed-certs-208583" hosting pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.895779   45037 pod_ready.go:81] duration metric: took 396.460908ms waiting for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	E0130 20:38:51.895798   45037 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-208583" hosting pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:51.895811   45037 pod_ready.go:38] duration metric: took 1.207302604s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
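Every pod in the wait loop above is skipped for the same reason: the node itself has not reported Ready yet after the restart, so per-pod readiness cannot be evaluated. A manual equivalent of that check, assuming the profile name is also the kubeconfig context as minikube normally arranges:

    kubectl --context embed-certs-208583 get nodes
    kubectl --context embed-certs-208583 -n kube-system get pods -o wide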
	I0130 20:38:51.895836   45037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:38:51.909431   45037 ops.go:34] apiserver oom_adj: -16
	I0130 20:38:51.909454   45037 kubeadm.go:640] restartCluster took 21.337960534s
	I0130 20:38:51.909472   45037 kubeadm.go:406] StartCluster complete in 21.386877314s
	I0130 20:38:51.909491   45037 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:51.909571   45037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:38:51.911558   45037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:51.911793   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:38:51.911888   45037 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:38:51.911974   45037 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-208583"
	I0130 20:38:51.911995   45037 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-208583"
	W0130 20:38:51.912007   45037 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:38:51.912044   45037 config.go:182] Loaded profile config "embed-certs-208583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:38:51.912101   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.912138   45037 addons.go:69] Setting default-storageclass=true in profile "embed-certs-208583"
	I0130 20:38:51.912168   45037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-208583"
	I0130 20:38:51.912131   45037 addons.go:69] Setting metrics-server=true in profile "embed-certs-208583"
	I0130 20:38:51.912238   45037 addons.go:234] Setting addon metrics-server=true in "embed-certs-208583"
	W0130 20:38:51.912250   45037 addons.go:243] addon metrics-server should already be in state true
	I0130 20:38:51.912328   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.912537   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912561   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.912583   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912603   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.912686   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.912711   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.923647   45037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-208583" context rescaled to 1 replicas
	I0130 20:38:51.923691   45037 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:38:51.926120   45037 out.go:177] * Verifying Kubernetes components...
	I0130 20:38:51.929413   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:38:51.930498   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0130 20:38:51.930911   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0130 20:38:51.931075   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.931580   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.931988   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.932001   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.932296   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.932730   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.932756   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.933221   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.933273   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.933917   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.934492   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.934524   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.936079   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42667
	I0130 20:38:51.936488   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.937121   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.937144   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.937525   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.937703   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.941576   45037 addons.go:234] Setting addon default-storageclass=true in "embed-certs-208583"
	W0130 20:38:51.941597   45037 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:38:51.941623   45037 host.go:66] Checking if "embed-certs-208583" exists ...
	I0130 20:38:51.942033   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.942072   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.953268   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44577
	I0130 20:38:51.953715   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0130 20:38:51.953863   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.954633   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.954659   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.954742   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.955212   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.955233   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.955318   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.955530   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.955663   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.955853   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.957839   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.958080   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.960896   45037 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:38:51.961493   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
	I0130 20:38:51.962677   45037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:38:51.962838   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:38:51.964463   45037 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:38:51.964487   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:38:51.964518   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.964486   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:38:51.964554   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.963107   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.965261   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.965274   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.965656   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.966482   45037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:38:51.966520   45037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:38:51.968651   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969034   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.969062   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969307   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.969493   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.969580   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.969656   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.969809   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:51.970328   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.970372   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.970391   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.970521   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.970706   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.970866   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:51.985009   45037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0130 20:38:51.985512   45037 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:38:51.986096   45037 main.go:141] libmachine: Using API Version  1
	I0130 20:38:51.986119   45037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:38:51.986558   45037 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:38:51.986778   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetState
	I0130 20:38:51.988698   45037 main.go:141] libmachine: (embed-certs-208583) Calling .DriverName
	I0130 20:38:51.991566   45037 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:38:51.991620   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:38:51.991647   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHHostname
	I0130 20:38:51.994416   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.995367   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHPort
	I0130 20:38:51.995370   45037 main.go:141] libmachine: (embed-certs-208583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:f2:e1", ip: ""} in network mk-embed-certs-208583: {Iface:virbr3 ExpiryTime:2024-01-30 21:38:15 +0000 UTC Type:0 Mac:52:54:00:43:f2:e1 Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:embed-certs-208583 Clientid:01:52:54:00:43:f2:e1}
	I0130 20:38:51.995439   45037 main.go:141] libmachine: (embed-certs-208583) DBG | domain embed-certs-208583 has defined IP address 192.168.61.63 and MAC address 52:54:00:43:f2:e1 in network mk-embed-certs-208583
	I0130 20:38:51.995585   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHKeyPath
	I0130 20:38:51.995740   45037 main.go:141] libmachine: (embed-certs-208583) Calling .GetSSHUsername
	I0130 20:38:51.995885   45037 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/embed-certs-208583/id_rsa Username:docker}
	I0130 20:38:52.125074   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:38:52.140756   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:38:52.140787   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:38:52.180728   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:38:52.195559   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:38:52.195587   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:38:52.235770   45037 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 20:38:52.235779   45037 node_ready.go:35] waiting up to 6m0s for node "embed-certs-208583" to be "Ready" ...
	I0130 20:38:52.243414   45037 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:38:52.243444   45037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:38:52.349604   45037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:38:54.111857   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.931041237s)
	I0130 20:38:54.111916   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.111938   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112013   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.986903299s)
	I0130 20:38:54.112051   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112065   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112337   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112383   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112398   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112403   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112411   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112421   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112426   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112434   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.112423   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112450   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.112653   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112728   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112748   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.112770   45037 main.go:141] libmachine: (embed-certs-208583) DBG | Closing plugin on server side
	I0130 20:38:54.112797   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.112806   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.119872   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.119893   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.120118   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.120138   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121373   45037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.771724991s)
	I0130 20:38:54.121408   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.121421   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.121619   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.121636   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121647   45037 main.go:141] libmachine: Making call to close driver server
	I0130 20:38:54.121655   45037 main.go:141] libmachine: (embed-certs-208583) Calling .Close
	I0130 20:38:54.121837   45037 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:38:54.121853   45037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:38:54.121875   45037 addons.go:470] Verifying addon metrics-server=true in "embed-certs-208583"
	I0130 20:38:54.332655   45037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
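The verification entry above (addons.go:470) only confirms metrics-server=true in the profile config. A minimal sketch for spot-checking the three enabled addons by hand against the same cluster, assuming the standard minikube addon layout (metrics-server deployed as a Deployment in kube-system, default-storageclass marking one StorageClass as default):

	# assumed standard addon layout; context name taken from the log above
	kubectl --context embed-certs-208583 -n kube-system rollout status deployment/metrics-server --timeout=120s
	kubectl --context embed-certs-208583 get storageclass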
	I0130 20:38:50.189837   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:50.190326   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:50.190352   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:50.190273   46425 retry.go:31] will retry after 843.505616ms: waiting for machine to come up
	I0130 20:38:51.035080   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:51.035482   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:51.035511   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:51.035454   46425 retry.go:31] will retry after 1.230675294s: waiting for machine to come up
	I0130 20:38:52.267754   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:52.268342   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:52.268365   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:52.268298   46425 retry.go:31] will retry after 1.516187998s: waiting for machine to come up
	I0130 20:38:53.785734   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:53.786142   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:53.786173   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:53.786084   46425 retry.go:31] will retry after 2.020274977s: waiting for machine to come up
	I0130 20:38:53.002777   45441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.390479779s)
	I0130 20:38:53.002812   45441 crio.go:451] Took 3.390595 seconds to extract the tarball
	I0130 20:38:53.002824   45441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:38:53.059131   45441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:38:53.121737   45441 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:38:53.121765   45441 cache_images.go:84] Images are preloaded, skipping loading
	I0130 20:38:53.121837   45441 ssh_runner.go:195] Run: crio config
	I0130 20:38:53.187904   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:38:53.187931   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:38:53.187953   45441 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:38:53.187982   45441 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-877742 NodeName:default-k8s-diff-port-877742 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:38:53.188157   45441 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-877742"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:38:53.188253   45441 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-877742 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-877742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0130 20:38:53.188320   45441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:38:53.200851   45441 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:38:53.200938   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:38:53.212897   45441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0130 20:38:53.231805   45441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:38:53.253428   45441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
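The three scp steps above stage the kubelet drop-in, the kubelet unit file and the regenerated kubeadm config on the node being restarted. A minimal sketch for inspecting what was written, reusing the paths from the log and the same binary/profile invocation style used elsewhere in this report (default-k8s-diff-port-877742 is the profile in this run):

	# sketch only; paths come straight from the scp lines above
	out/minikube-linux-amd64 -p default-k8s-diff-port-877742 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	out/minikube-linux-amd64 -p default-k8s-diff-port-877742 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"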
	I0130 20:38:53.274041   45441 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0130 20:38:53.278499   45441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:38:53.295089   45441 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742 for IP: 192.168.72.52
	I0130 20:38:53.295126   45441 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:38:53.295326   45441 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:38:53.295393   45441 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:38:53.295497   45441 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.key
	I0130 20:38:53.295581   45441 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.key.02e1fdc8
	I0130 20:38:53.295637   45441 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.key
	I0130 20:38:53.295774   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:38:53.295813   45441 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:38:53.295827   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:38:53.295864   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:38:53.295912   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:38:53.295948   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:38:53.296012   45441 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:38:53.296828   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:38:53.326150   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:38:53.356286   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:38:53.384496   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:38:53.414403   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:38:53.440628   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:38:53.465452   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:38:53.494321   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:38:53.520528   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:38:53.543933   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:38:53.569293   45441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:38:53.594995   45441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:38:53.615006   45441 ssh_runner.go:195] Run: openssl version
	I0130 20:38:53.622442   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:38:53.636482   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.642501   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.642572   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:38:53.649251   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:38:53.661157   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:38:53.673453   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.678369   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.678439   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:38:53.684812   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:38:53.696906   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:38:53.710065   45441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.714715   45441 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.714776   45441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:38:53.720458   45441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
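The openssl/ln pairs above build OpenSSL's subject-hash lookup names: each symlink under /etc/ssl/certs is named after the hash that the preceding `openssl x509 -hash` call prints (b5213941 for minikubeCA.pem, 3ec20f2e for 116672.pem, 51391683 for 11667.pem in this run). A minimal sketch that re-derives those symlink names on the node:

	# re-derive the subject-hash symlink name for each CA cert installed above
	for pem in minikubeCA.pem 116672.pem 11667.pem; do
	  h=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
	  ls -l "/etc/ssl/certs/$h.0"
	done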
	I0130 20:38:53.733050   45441 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:38:53.737894   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:38:53.744337   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:38:53.750326   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:38:53.756139   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:38:53.761883   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:38:53.767633   45441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:38:53.773367   45441 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-877742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-877742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.52 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:38:53.773480   45441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:38:53.773551   45441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:38:53.815095   45441 cri.go:89] found id: ""
	I0130 20:38:53.815159   45441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:38:53.826497   45441 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:38:53.826521   45441 kubeadm.go:636] restartCluster start
	I0130 20:38:53.826570   45441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:38:53.837155   45441 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:53.838622   45441 kubeconfig.go:92] found "default-k8s-diff-port-877742" server: "https://192.168.72.52:8444"
	I0130 20:38:53.841776   45441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:38:53.852124   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:53.852191   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:53.864432   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.353064   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:54.353141   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:54.365422   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:54.853083   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:54.853170   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:54.869932   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:55.352281   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:55.352360   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:55.369187   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
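The run of "Checking apiserver status" entries here and below is a roughly 500 ms poll (see the timestamps): each attempt runs the pgrep command shown and exits 1 because no kube-apiserver process exists yet on the just-restarted node, which is why both stdout and stderr stay empty. A minimal sketch of the same probe, runnable by hand on the node:

	# same liveness check the restart loop above keeps retrying
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running yet"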
	I0130 20:38:54.550999   45037 addons.go:505] enable addons completed in 2.639107358s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:38:54.692017   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:56.740251   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:55.809310   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:55.809708   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:55.809741   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:55.809655   46425 retry.go:31] will retry after 1.997080797s: waiting for machine to come up
	I0130 20:38:57.808397   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:38:57.808798   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:38:57.808829   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:38:57.808744   46425 retry.go:31] will retry after 3.605884761s: waiting for machine to come up
	I0130 20:38:55.852241   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:55.852356   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:55.864923   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:56.352455   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:56.352559   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:56.368458   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:56.853090   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:56.853175   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:56.869148   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:57.352965   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:57.353055   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:57.370697   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:57.852261   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:57.852391   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:57.868729   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:58.352147   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:58.352250   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:58.368543   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:58.852300   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:58.852373   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:58.868594   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.353039   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:59.353110   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:59.365593   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.852147   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:38:59.852276   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:38:59.865561   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:00.353077   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:00.353186   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:00.370006   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:38:59.242842   45037 node_ready.go:58] node "embed-certs-208583" has status "Ready":"False"
	I0130 20:38:59.739830   45037 node_ready.go:49] node "embed-certs-208583" has status "Ready":"True"
	I0130 20:38:59.739851   45037 node_ready.go:38] duration metric: took 7.503983369s waiting for node "embed-certs-208583" to be "Ready" ...
	I0130 20:38:59.739859   45037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:38:59.746243   45037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.751722   45037 pod_ready.go:92] pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.751745   45037 pod_ready.go:81] duration metric: took 5.480115ms waiting for pod "coredns-5dd5756b68-jqzzv" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.751752   45037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.757152   45037 pod_ready.go:92] pod "etcd-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.757175   45037 pod_ready.go:81] duration metric: took 5.417291ms waiting for pod "etcd-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.757184   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.762156   45037 pod_ready.go:92] pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:38:59.762231   45037 pod_ready.go:81] duration metric: took 4.985076ms waiting for pod "kube-apiserver-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:38:59.762267   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:01.773853   45037 pod_ready.go:102] pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:01.415831   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:01.416304   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | unable to find current IP address of domain old-k8s-version-150971 in network mk-old-k8s-version-150971
	I0130 20:39:01.416345   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | I0130 20:39:01.416273   46425 retry.go:31] will retry after 3.558433109s: waiting for machine to come up
	I0130 20:39:00.852444   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:00.852545   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:00.865338   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:01.353002   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:01.353101   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:01.366419   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:01.853034   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:01.853114   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:01.866142   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:02.352652   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:02.352752   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:02.364832   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:02.852325   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:02.852406   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:02.864013   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.352408   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:03.352518   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:03.363939   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.853126   45441 api_server.go:166] Checking apiserver status ...
	I0130 20:39:03.853200   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:03.865047   45441 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:03.865084   45441 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:39:03.865094   45441 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:39:03.865105   45441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:39:03.865154   45441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:03.904863   45441 cri.go:89] found id: ""
	I0130 20:39:03.904932   45441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:39:03.922225   45441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:39:03.931861   45441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:39:03.931915   45441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:03.941185   45441 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:03.941205   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.064230   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.627940   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.816900   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.893059   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:04.986288   45441 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:39:04.986362   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.448368   44923 start.go:369] acquired machines lock for "no-preload-473743" in 58.134425603s
	I0130 20:39:06.448435   44923 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:39:06.448443   44923 fix.go:54] fixHost starting: 
	I0130 20:39:06.448866   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:39:06.448900   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:39:06.468570   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I0130 20:39:06.468965   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:39:06.469552   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:39:06.469587   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:39:06.469950   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:39:06.470153   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:06.470312   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:39:06.472312   44923 fix.go:102] recreateIfNeeded on no-preload-473743: state=Stopped err=<nil>
	I0130 20:39:06.472337   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	W0130 20:39:06.472495   44923 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:39:06.474460   44923 out.go:177] * Restarting existing kvm2 VM for "no-preload-473743" ...
	I0130 20:39:04.976314   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.976787   45819 main.go:141] libmachine: (old-k8s-version-150971) Found IP for machine: 192.168.39.16
	I0130 20:39:04.976820   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.976830   45819 main.go:141] libmachine: (old-k8s-version-150971) Reserving static IP address...
	I0130 20:39:04.977271   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "old-k8s-version-150971", mac: "52:54:00:6e:fe:f8", ip: "192.168.39.16"} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:04.977300   45819 main.go:141] libmachine: (old-k8s-version-150971) Reserved static IP address: 192.168.39.16
	I0130 20:39:04.977325   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | skip adding static IP to network mk-old-k8s-version-150971 - found existing host DHCP lease matching {name: "old-k8s-version-150971", mac: "52:54:00:6e:fe:f8", ip: "192.168.39.16"}
	I0130 20:39:04.977345   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Getting to WaitForSSH function...
	I0130 20:39:04.977361   45819 main.go:141] libmachine: (old-k8s-version-150971) Waiting for SSH to be available...
	I0130 20:39:04.979621   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.980015   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:04.980042   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:04.980138   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Using SSH client type: external
	I0130 20:39:04.980164   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa (-rw-------)
	I0130 20:39:04.980206   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:39:04.980221   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | About to run SSH command:
	I0130 20:39:04.980259   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | exit 0
	I0130 20:39:05.079758   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | SSH cmd err, output: <nil>: 
	I0130 20:39:05.080092   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetConfigRaw
	I0130 20:39:05.080846   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:05.083636   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.084019   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.084062   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.084354   45819 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/config.json ...
	I0130 20:39:05.084608   45819 machine.go:88] provisioning docker machine ...
	I0130 20:39:05.084635   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:05.084845   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.085031   45819 buildroot.go:166] provisioning hostname "old-k8s-version-150971"
	I0130 20:39:05.085063   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.085221   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.087561   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.087930   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.087963   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.088067   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.088220   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.088384   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.088550   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.088736   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.089124   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.089141   45819 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-150971 && echo "old-k8s-version-150971" | sudo tee /etc/hostname
	I0130 20:39:05.232496   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-150971
	
	I0130 20:39:05.232528   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.234898   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.235190   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.235227   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.235310   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.235515   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.235655   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.235791   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.235921   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.236233   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.236251   45819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-150971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-150971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-150971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:39:05.370716   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:39:05.370753   45819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:39:05.370774   45819 buildroot.go:174] setting up certificates
	I0130 20:39:05.370787   45819 provision.go:83] configureAuth start
	I0130 20:39:05.370800   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetMachineName
	I0130 20:39:05.371158   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:05.373602   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.373946   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.373970   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.374153   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.376230   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.376617   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.376657   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.376763   45819 provision.go:138] copyHostCerts
	I0130 20:39:05.376816   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:39:05.376826   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:39:05.376892   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:39:05.377066   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:39:05.377079   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:39:05.377113   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:39:05.377205   45819 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:39:05.377216   45819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:39:05.377243   45819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:39:05.377336   45819 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-150971 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube old-k8s-version-150971]
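
The server certificate generated here gets the SAN list shown in the log line (192.168.39.16, localhost, 127.0.0.1, minikube, old-k8s-version-150971), signed by the shared minikube CA. A self-contained Go sketch of that signing step with crypto/x509; the throwaway in-memory CA below stands in for ca.pem/ca-key.pem, and none of this is minikube's actual code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for the ca.pem / ca-key.pem pair above.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SAN list from the provision.go:112 line.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-150971"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-150971"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.16"), net.ParseIP("127.0.0.1")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
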
	I0130 20:39:05.649128   45819 provision.go:172] copyRemoteCerts
	I0130 20:39:05.649183   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:39:05.649206   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.652019   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.652353   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.652385   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.652657   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.652857   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.653048   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.653207   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:05.753981   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0130 20:39:05.782847   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 20:39:05.810083   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:39:05.836967   45819 provision.go:86] duration metric: configureAuth took 466.16712ms
	I0130 20:39:05.836989   45819 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:39:05.837156   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:39:05.837222   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:05.840038   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.840384   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:05.840422   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:05.840597   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:05.840832   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.841019   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:05.841167   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:05.841338   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:05.841681   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:05.841700   45819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:39:06.170121   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:39:06.170151   45819 machine.go:91] provisioned docker machine in 1.08552444s
	I0130 20:39:06.170163   45819 start.go:300] post-start starting for "old-k8s-version-150971" (driver="kvm2")
	I0130 20:39:06.170183   45819 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:39:06.170202   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.170544   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:39:06.170583   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.173794   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.174165   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.174192   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.174421   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.174620   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.174804   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.174947   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.273272   45819 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:39:06.277900   45819 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:39:06.277928   45819 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:39:06.278010   45819 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:39:06.278099   45819 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:39:06.278207   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:39:06.286905   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:06.311772   45819 start.go:303] post-start completed in 141.592454ms
	I0130 20:39:06.311808   45819 fix.go:56] fixHost completed within 20.175639407s
	I0130 20:39:06.311832   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.314627   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.314998   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.315027   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.315179   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.315402   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.315585   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.315758   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.315936   45819 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:06.316254   45819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0130 20:39:06.316269   45819 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:39:06.448193   45819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647146.389757507
	
	I0130 20:39:06.448219   45819 fix.go:206] guest clock: 1706647146.389757507
	I0130 20:39:06.448230   45819 fix.go:219] Guest: 2024-01-30 20:39:06.389757507 +0000 UTC Remote: 2024-01-30 20:39:06.311812895 +0000 UTC m=+176.717060563 (delta=77.944612ms)
	I0130 20:39:06.448277   45819 fix.go:190] guest clock delta is within tolerance: 77.944612ms
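
The fix.go lines above parse the guest's `date +%s.%N` output and compare it with the host-side timestamp. A small illustrative Go version of that comparison, using the values from this log; the 2s tolerance is an assumption, not the threshold minikube uses:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "1706647146.389757507" (date +%s.%N) into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1706647146.389757507")
        remote := time.Date(2024, 1, 30, 20, 39, 6, 311812895, time.UTC)
        delta := guest.Sub(remote) // ~77.9ms for the values logged above
        const tolerance = 2 * time.Second // assumed threshold, for illustration
        fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
    }
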
	I0130 20:39:06.448285   45819 start.go:83] releasing machines lock for "old-k8s-version-150971", held for 20.312150878s
	I0130 20:39:06.448318   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.448584   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:06.451978   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.452448   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.452475   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.452632   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453188   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453364   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:39:06.453450   45819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:39:06.453501   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.453604   45819 ssh_runner.go:195] Run: cat /version.json
	I0130 20:39:06.453622   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:39:06.456426   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.456694   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.456722   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.456743   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.457015   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.457218   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:06.457228   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.457266   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:06.457473   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.457483   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:39:06.457648   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:39:06.457657   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.457834   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:39:06.457945   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:39:06.575025   45819 ssh_runner.go:195] Run: systemctl --version
	I0130 20:39:06.580884   45819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:39:06.730119   45819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:39:06.737872   45819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:39:06.737945   45819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:39:06.752952   45819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:39:06.752987   45819 start.go:475] detecting cgroup driver to use...
	I0130 20:39:06.753062   45819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:39:06.772925   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:39:06.787880   45819 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:39:06.787957   45819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:39:06.805662   45819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:39:06.820819   45819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:39:06.941809   45819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:39:07.067216   45819 docker.go:233] disabling docker service ...
	I0130 20:39:07.067299   45819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:39:07.084390   45819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:39:07.099373   45819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:39:07.242239   45819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:39:07.378297   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:39:07.390947   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:39:07.414177   45819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0130 20:39:07.414256   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.427074   45819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:39:07.427154   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.439058   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:07.451547   45819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
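
The sed runs above pin the pause image, the cgroup manager, and the conmon cgroup inside /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of the first two edits, applied to an in-memory config string (the sample starting values are invented for illustration; the real flow runs sed over SSH on the guest):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Hypothetical starting content of 02-crio.conf.
        conf := "pause_image = \"registry.k8s.io/pause:3.2\"\ncgroup_manager = \"systemd\"\n"
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.1"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }
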
	I0130 20:39:07.462473   45819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:39:07.474082   45819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:39:07.484883   45819 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:39:07.484943   45819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:39:07.502181   45819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:39:07.511315   45819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:39:07.677114   45819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:39:07.878176   45819 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:39:07.878247   45819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:39:07.885855   45819 start.go:543] Will wait 60s for crictl version
	I0130 20:39:07.885918   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:07.895480   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:39:07.946256   45819 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:39:07.946344   45819 ssh_runner.go:195] Run: crio --version
	I0130 20:39:07.999647   45819 ssh_runner.go:195] Run: crio --version
	I0130 20:39:08.064335   45819 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0130 20:39:04.270868   45037 pod_ready.go:92] pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.270900   45037 pod_ready.go:81] duration metric: took 4.508624463s waiting for pod "kube-controller-manager-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.270911   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.276806   45037 pod_ready.go:92] pod "kube-proxy-g7q5t" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.276830   45037 pod_ready.go:81] duration metric: took 5.914142ms waiting for pod "kube-proxy-g7q5t" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.276839   45037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.283207   45037 pod_ready.go:92] pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:04.283225   45037 pod_ready.go:81] duration metric: took 6.380407ms waiting for pod "kube-scheduler-embed-certs-208583" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:04.283235   45037 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:06.291591   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:08.318273   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:08.065754   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetIP
	I0130 20:39:08.068986   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:08.069433   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:39:08.069477   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:39:08.069665   45819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 20:39:08.074101   45819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:08.088404   45819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 20:39:08.088468   45819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:08.133749   45819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 20:39:08.133831   45819 ssh_runner.go:195] Run: which lz4
	I0130 20:39:08.138114   45819 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:39:08.142668   45819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:39:08.142709   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0130 20:39:05.487066   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:05.987386   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.486465   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:06.987491   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:07.486540   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:07.518826   45441 api_server.go:72] duration metric: took 2.532536561s to wait for apiserver process to appear ...
	I0130 20:39:07.518852   45441 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:39:07.518875   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:06.475902   44923 main.go:141] libmachine: (no-preload-473743) Calling .Start
	I0130 20:39:06.476095   44923 main.go:141] libmachine: (no-preload-473743) Ensuring networks are active...
	I0130 20:39:06.476929   44923 main.go:141] libmachine: (no-preload-473743) Ensuring network default is active
	I0130 20:39:06.477344   44923 main.go:141] libmachine: (no-preload-473743) Ensuring network mk-no-preload-473743 is active
	I0130 20:39:06.477817   44923 main.go:141] libmachine: (no-preload-473743) Getting domain xml...
	I0130 20:39:06.478643   44923 main.go:141] libmachine: (no-preload-473743) Creating domain...
	I0130 20:39:07.834909   44923 main.go:141] libmachine: (no-preload-473743) Waiting to get IP...
	I0130 20:39:07.835906   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:07.836320   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:07.836371   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:07.836287   46613 retry.go:31] will retry after 205.559104ms: waiting for machine to come up
	I0130 20:39:08.043926   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.044522   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.044607   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.044570   46613 retry.go:31] will retry after 291.055623ms: waiting for machine to come up
	I0130 20:39:08.337157   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.337756   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.337859   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.337823   46613 retry.go:31] will retry after 462.903788ms: waiting for machine to come up
	I0130 20:39:08.802588   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:08.803397   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:08.803497   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:08.803459   46613 retry.go:31] will retry after 497.808285ms: waiting for machine to come up
	I0130 20:39:09.303349   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:09.304015   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:09.304037   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:09.303936   46613 retry.go:31] will retry after 569.824748ms: waiting for machine to come up
	I0130 20:39:09.875816   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:09.876316   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:09.876348   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:09.876259   46613 retry.go:31] will retry after 589.654517ms: waiting for machine to come up
	I0130 20:39:10.467029   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:10.467568   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:10.467601   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:10.467520   46613 retry.go:31] will retry after 857.069247ms: waiting for machine to come up
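
The repeated "will retry after ..." lines are the driver polling libvirt for the machine's DHCP lease. A generic Go sketch of that style of jittered retry loop; the lookup stub, timings, and returned IP below are hypothetical and not the kvm2 driver's actual logic:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP retries lookup with growing, jittered delays until it yields an
    // address or the deadline passes, echoing the retry.go lines above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for attempt := 1; time.Now().Before(deadline); attempt++ {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            wait := time.Duration(attempt*(200+rand.Intn(400))) * time.Millisecond
            fmt.Printf("retry %d: will retry after %v: waiting for machine to come up\n", attempt, wait)
            time.Sleep(wait)
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 3 {
                return "", errors.New("no DHCP lease yet") // simulated misses
            }
            return "192.168.50.2", nil // hypothetical address
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
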
	I0130 20:39:10.796542   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:13.290072   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:09.980254   45819 crio.go:444] Took 1.842164 seconds to copy over tarball
	I0130 20:39:09.980328   45819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:39:13.116148   45819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.135783447s)
	I0130 20:39:13.116184   45819 crio.go:451] Took 3.135904 seconds to extract the tarball
	I0130 20:39:13.116196   45819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:39:13.161285   45819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:13.226970   45819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 20:39:13.227008   45819 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 20:39:13.227096   45819 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.227151   45819 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.227153   45819 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.227173   45819 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.227121   45819 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:13.227155   45819 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0130 20:39:13.227439   45819 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.227117   45819 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.229003   45819 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.229038   45819 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:13.229065   45819 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.229112   45819 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.229011   45819 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0130 20:39:13.229170   45819 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.229177   45819 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.229217   45819 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.443441   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.484878   45819 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0130 20:39:13.484941   45819 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.485021   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.489291   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0130 20:39:13.526847   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.526966   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.527312   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0130 20:39:13.528949   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.532002   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0130 20:39:13.532309   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.532701   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.662312   45819 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0130 20:39:13.662355   45819 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.662422   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.669155   45819 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0130 20:39:13.669201   45819 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.669265   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708339   45819 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0130 20:39:13.708373   45819 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0130 20:39:13.708398   45819 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0130 20:39:13.708404   45819 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.708435   45819 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0130 20:39:13.708470   45819 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.708476   45819 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0130 20:39:13.708491   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0130 20:39:13.708507   45819 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.708508   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708451   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708443   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.708565   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0130 20:39:13.708549   45819 ssh_runner.go:195] Run: which crictl
	I0130 20:39:13.767721   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 20:39:13.767762   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0130 20:39:13.767789   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0130 20:39:13.767835   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0130 20:39:13.767869   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0130 20:39:13.767935   45819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0130 20:39:13.816661   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0130 20:39:13.863740   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0130 20:39:13.863751   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0130 20:39:13.863798   45819 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0130 20:39:14.096216   45819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:14.241457   45819 cache_images.go:92] LoadImages completed in 1.014424533s
	W0130 20:39:14.241562   45819 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
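
The "needs transfer" / "Unable to load cached images" outcome comes down to comparing `sudo crictl images --output json` against the image list required for v1.16.0. An illustrative Go check of that comparison, reading the JSON from stdin and assuming the usual crictl output shape (a top-level "images" array with "repoTags"); this is not minikube's cache_images.go:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.16.0",
            "registry.k8s.io/kube-controller-manager:v1.16.0",
            "registry.k8s.io/kube-scheduler:v1.16.0",
            "registry.k8s.io/kube-proxy:v1.16.0",
            "registry.k8s.io/pause:3.1",
            "registry.k8s.io/etcd:3.3.15-0",
            "registry.k8s.io/coredns:1.6.2",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        var list imageList
        if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            os.Exit(1)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, want := range required {
            if !have[want] {
                fmt.Println("needs transfer:", want)
            }
        }
    }
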
	I0130 20:39:14.241641   45819 ssh_runner.go:195] Run: crio config
	I0130 20:39:14.307624   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:39:14.307644   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:14.307673   45819 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:39:14.307696   45819 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-150971 NodeName:old-k8s-version-150971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0130 20:39:14.307866   45819 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-150971"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-150971
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.16:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:39:14.307973   45819 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-150971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:39:14.308042   45819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0130 20:39:14.318757   45819 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:39:14.318830   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:39:14.329640   45819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0130 20:39:14.347498   45819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:39:14.365403   45819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0130 20:39:14.383846   45819 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I0130 20:39:14.388138   45819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:14.402420   45819 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971 for IP: 192.168.39.16
	I0130 20:39:14.402483   45819 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:39:14.402661   45819 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:39:14.402707   45819 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:39:14.402780   45819 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.key
	I0130 20:39:14.402837   45819 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.key.5918fcb3
	I0130 20:39:14.402877   45819 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.key
	I0130 20:39:14.403025   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:39:14.403076   45819 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:39:14.403094   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:39:14.403131   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:39:14.403171   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:39:14.403206   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:39:14.403290   45819 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:14.404157   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:39:14.430902   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 20:39:14.454554   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:39:14.482335   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 20:39:14.505963   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:39:14.532616   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:39:14.558930   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:39:14.585784   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:39:14.609214   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:39:14.635743   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:39:12.268901   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:12.268934   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:12.268948   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:12.307051   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:12.307093   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:12.519619   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:12.530857   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:12.530904   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:13.019370   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:13.024544   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:13.024577   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:13.519023   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:13.525748   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:13.525784   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:14.019318   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:14.026570   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:14.026600   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:14.519000   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.074306   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.074341   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:15.074353   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.081035   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.081075   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:11.325993   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:11.326475   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:11.326506   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:11.326439   46613 retry.go:31] will retry after 994.416536ms: waiting for machine to come up
	I0130 20:39:12.323190   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:12.323897   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:12.323924   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:12.323807   46613 retry.go:31] will retry after 1.746704262s: waiting for machine to come up
	I0130 20:39:14.072583   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:14.073100   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:14.073158   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:14.073072   46613 retry.go:31] will retry after 2.322781715s: waiting for machine to come up
	I0130 20:39:15.519005   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:15.609496   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:39:15.609529   45441 api_server.go:103] status: https://192.168.72.52:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:39:16.018990   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:39:16.024390   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 200:
	ok
	I0130 20:39:16.037151   45441 api_server.go:141] control plane version: v1.28.4
	I0130 20:39:16.037191   45441 api_server.go:131] duration metric: took 8.518327222s to wait for apiserver health ...
	I0130 20:39:16.037203   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:39:16.037211   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:16.039114   45441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:39:15.290788   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:17.292552   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:14.662372   45819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:39:14.814291   45819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:39:14.832453   45819 ssh_runner.go:195] Run: openssl version
	I0130 20:39:14.838238   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:39:14.848628   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.853713   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.853761   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:39:14.859768   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:39:14.870658   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:39:14.881444   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.886241   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.886290   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:14.892197   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:39:14.903459   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:39:14.914463   45819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.919337   45819 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.919413   45819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:39:14.925258   45819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:39:14.935893   45819 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:39:14.941741   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:39:14.948871   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:39:14.955038   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:39:14.961605   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:39:14.967425   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:39:14.973845   45819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:39:14.980072   45819 kubeadm.go:404] StartCluster: {Name:old-k8s-version-150971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-150971 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:39:14.980218   45819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:39:14.980265   45819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:15.021821   45819 cri.go:89] found id: ""
	I0130 20:39:15.021920   45819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:39:15.033604   45819 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:39:15.033629   45819 kubeadm.go:636] restartCluster start
	I0130 20:39:15.033686   45819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:39:15.044324   45819 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:15.045356   45819 kubeconfig.go:92] found "old-k8s-version-150971" server: "https://192.168.39.16:8443"
	I0130 20:39:15.047610   45819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:39:15.057690   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:15.057746   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:15.073207   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:15.558392   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:15.558480   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:15.574711   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.057794   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:16.057971   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:16.073882   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.557808   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:16.557879   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:16.571659   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:17.057817   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:17.057922   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:17.074250   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:17.557727   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:17.557809   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:17.573920   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:18.058504   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:18.058573   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:18.070636   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:18.558163   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:18.558262   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:18.570781   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:19.058321   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:19.058414   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:19.074887   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:19.558503   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:19.558596   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:19.570666   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:16.040606   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:39:16.065469   45441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:39:16.099751   45441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:39:16.113444   45441 system_pods.go:59] 8 kube-system pods found
	I0130 20:39:16.113486   45441 system_pods.go:61] "coredns-5dd5756b68-2955f" [abae9f5c-ed48-494b-b014-a28f6290d772] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:39:16.113498   45441 system_pods.go:61] "etcd-default-k8s-diff-port-877742" [0f69a8d9-5549-4f3a-8b12-ee9e96e08271] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:39:16.113509   45441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-877742" [ab6cf2c3-cc75-44b8-b039-6e21881a9ade] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:39:16.113519   45441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-877742" [4b313734-cd1e-4229-afcd-4d0b517594ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:39:16.113533   45441 system_pods.go:61] "kube-proxy-s9ssn" [ea1c69e6-d319-41ee-a47f-4937f03ecdc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:39:16.113549   45441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-877742" [3f4d9e5f-1421-4576-839b-3bdfba56700b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:39:16.113566   45441 system_pods.go:61] "metrics-server-57f55c9bc5-hzfwg" [1e06ac92-f7ff-418a-9a8d-72d763566bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:39:16.113582   45441 system_pods.go:61] "storage-provisioner" [4cf793ab-e7a5-4a51-bcb9-a07bea323a44] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:39:16.113599   45441 system_pods.go:74] duration metric: took 13.827445ms to wait for pod list to return data ...
	I0130 20:39:16.113608   45441 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:39:16.121786   45441 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:39:16.121882   45441 node_conditions.go:123] node cpu capacity is 2
	I0130 20:39:16.121904   45441 node_conditions.go:105] duration metric: took 8.289345ms to run NodePressure ...
	I0130 20:39:16.121929   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:16.440112   45441 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:39:16.447160   45441 kubeadm.go:787] kubelet initialised
	I0130 20:39:16.447188   45441 kubeadm.go:788] duration metric: took 7.04624ms waiting for restarted kubelet to initialise ...
	I0130 20:39:16.447198   45441 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:39:16.457164   45441 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-2955f" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:16.463990   45441 pod_ready.go:97] node "default-k8s-diff-port-877742" hosting pod "coredns-5dd5756b68-2955f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.464020   45441 pod_ready.go:81] duration metric: took 6.825543ms waiting for pod "coredns-5dd5756b68-2955f" in "kube-system" namespace to be "Ready" ...
	E0130 20:39:16.464033   45441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-877742" hosting pod "coredns-5dd5756b68-2955f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.464044   45441 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:16.476983   45441 pod_ready.go:97] node "default-k8s-diff-port-877742" hosting pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.477077   45441 pod_ready.go:81] duration metric: took 12.988392ms waiting for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	E0130 20:39:16.477109   45441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-877742" hosting pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-877742" has status "Ready":"False"
	I0130 20:39:16.477128   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:18.486083   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:16.397588   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:16.398050   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:16.398082   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:16.397988   46613 retry.go:31] will retry after 2.411227582s: waiting for machine to come up
	I0130 20:39:18.810874   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:18.811404   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:18.811439   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:18.811358   46613 retry.go:31] will retry after 2.231016506s: waiting for machine to come up
	I0130 20:39:19.296383   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:21.790307   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:20.058718   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:20.058800   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:20.074443   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:20.558683   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:20.558756   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:20.574765   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.058367   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:21.058456   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:21.074652   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.558528   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:21.558648   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:21.573650   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:22.058161   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:22.058280   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:22.070780   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:22.558448   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:22.558525   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:22.572220   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:23.057797   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:23.057884   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:23.071260   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:23.558193   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:23.558278   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:23.571801   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:24.058483   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:24.058603   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:24.070898   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:24.558465   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:24.558546   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:24.573966   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:21.008056   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:23.484244   45441 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:23.987592   45441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.987615   45441 pod_ready.go:81] duration metric: took 7.510477497s waiting for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.987624   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.993335   45441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.993358   45441 pod_ready.go:81] duration metric: took 5.726687ms waiting for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.993373   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s9ssn" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.998021   45441 pod_ready.go:92] pod "kube-proxy-s9ssn" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:23.998045   45441 pod_ready.go:81] duration metric: took 4.664039ms waiting for pod "kube-proxy-s9ssn" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:23.998057   45441 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:21.044853   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:21.045392   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:21.045423   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:21.045336   46613 retry.go:31] will retry after 3.525646558s: waiting for machine to come up
	I0130 20:39:24.573139   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:24.573573   44923 main.go:141] libmachine: (no-preload-473743) DBG | unable to find current IP address of domain no-preload-473743 in network mk-no-preload-473743
	I0130 20:39:24.573596   44923 main.go:141] libmachine: (no-preload-473743) DBG | I0130 20:39:24.573532   46613 retry.go:31] will retry after 4.365207536s: waiting for machine to come up
	I0130 20:39:23.790893   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:25.791630   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:28.291352   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:25.058653   45819 api_server.go:166] Checking apiserver status ...
	I0130 20:39:25.058753   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:25.072061   45819 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:25.072091   45819 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:39:25.072115   45819 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:39:25.072127   45819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:39:25.072183   45819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:25.121788   45819 cri.go:89] found id: ""
	I0130 20:39:25.121863   45819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:39:25.137294   45819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:39:25.146157   45819 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:39:25.146213   45819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:25.155323   45819 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:39:25.155346   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:25.279765   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.617419   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.337617183s)
	I0130 20:39:26.617457   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.825384   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:26.916818   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:27.026546   45819 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:39:27.026647   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:27.527637   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.026724   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.527352   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:39:28.578771   45819 api_server.go:72] duration metric: took 1.552227614s to wait for apiserver process to appear ...
	I0130 20:39:28.578793   45819 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:39:28.578814   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:28.579348   45819 api_server.go:269] stopped: https://192.168.39.16:8443/healthz: Get "https://192.168.39.16:8443/healthz": dial tcp 192.168.39.16:8443: connect: connection refused
	I0130 20:39:29.078918   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:26.006018   45441 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:27.506562   45441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:39:27.506596   45441 pod_ready.go:81] duration metric: took 3.50852897s waiting for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:27.506609   45441 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" ...
	I0130 20:39:29.514067   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:28.941922   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.942489   44923 main.go:141] libmachine: (no-preload-473743) Found IP for machine: 192.168.50.220
	I0130 20:39:28.942511   44923 main.go:141] libmachine: (no-preload-473743) Reserving static IP address...
	I0130 20:39:28.942528   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has current primary IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.943003   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "no-preload-473743", mac: "52:54:00:c5:07:4a", ip: "192.168.50.220"} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:28.943046   44923 main.go:141] libmachine: (no-preload-473743) DBG | skip adding static IP to network mk-no-preload-473743 - found existing host DHCP lease matching {name: "no-preload-473743", mac: "52:54:00:c5:07:4a", ip: "192.168.50.220"}
	I0130 20:39:28.943063   44923 main.go:141] libmachine: (no-preload-473743) Reserved static IP address: 192.168.50.220
	I0130 20:39:28.943081   44923 main.go:141] libmachine: (no-preload-473743) DBG | Getting to WaitForSSH function...
	I0130 20:39:28.943092   44923 main.go:141] libmachine: (no-preload-473743) Waiting for SSH to be available...
	I0130 20:39:28.945617   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.945983   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:28.946016   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:28.946192   44923 main.go:141] libmachine: (no-preload-473743) DBG | Using SSH client type: external
	I0130 20:39:28.946224   44923 main.go:141] libmachine: (no-preload-473743) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa (-rw-------)
	I0130 20:39:28.946257   44923 main.go:141] libmachine: (no-preload-473743) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:39:28.946268   44923 main.go:141] libmachine: (no-preload-473743) DBG | About to run SSH command:
	I0130 20:39:28.946279   44923 main.go:141] libmachine: (no-preload-473743) DBG | exit 0
	I0130 20:39:29.047127   44923 main.go:141] libmachine: (no-preload-473743) DBG | SSH cmd err, output: <nil>: 
	I0130 20:39:29.047505   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetConfigRaw
	I0130 20:39:29.048239   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:29.051059   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.051539   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.051572   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.051875   44923 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/config.json ...
	I0130 20:39:29.052098   44923 machine.go:88] provisioning docker machine ...
	I0130 20:39:29.052122   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:29.052328   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.052480   44923 buildroot.go:166] provisioning hostname "no-preload-473743"
	I0130 20:39:29.052503   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.052693   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.055532   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.055937   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.055968   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.056075   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.056267   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.056428   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.056644   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.056802   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.057242   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.057266   44923 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-473743 && echo "no-preload-473743" | sudo tee /etc/hostname
	I0130 20:39:29.199944   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-473743
	
	I0130 20:39:29.199987   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.202960   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.203402   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.203428   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.203648   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.203840   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.203974   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.204101   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.204253   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.204787   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.204815   44923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-473743' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-473743/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-473743' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:39:29.343058   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:39:29.343090   44923 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:39:29.343118   44923 buildroot.go:174] setting up certificates
	I0130 20:39:29.343131   44923 provision.go:83] configureAuth start
	I0130 20:39:29.343154   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetMachineName
	I0130 20:39:29.343457   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:29.346265   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.346671   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.346714   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.346889   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.349402   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.349799   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.349830   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.350015   44923 provision.go:138] copyHostCerts
	I0130 20:39:29.350079   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:39:29.350092   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:39:29.350146   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:39:29.350244   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:39:29.350253   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:39:29.350277   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:39:29.350343   44923 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:39:29.350354   44923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:39:29.350371   44923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:39:29.350428   44923 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.no-preload-473743 san=[192.168.50.220 192.168.50.220 localhost 127.0.0.1 minikube no-preload-473743]
	I0130 20:39:29.671070   44923 provision.go:172] copyRemoteCerts
	I0130 20:39:29.671125   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:39:29.671150   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.673890   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.674199   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.674234   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.674386   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.674604   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.674744   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.674901   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:29.769184   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:39:29.797035   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 20:39:29.822932   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 20:39:29.849781   44923 provision.go:86] duration metric: configureAuth took 506.627652ms
	I0130 20:39:29.849818   44923 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:39:29.850040   44923 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 20:39:29.850134   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:29.852709   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.853108   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:29.853137   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:29.853331   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:29.853574   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.853757   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:29.853924   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:29.854108   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:29.854635   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:29.854660   44923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:39:30.232249   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:39:30.232288   44923 machine.go:91] provisioned docker machine in 1.180174143s
	I0130 20:39:30.232302   44923 start.go:300] post-start starting for "no-preload-473743" (driver="kvm2")
	I0130 20:39:30.232321   44923 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:39:30.232348   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.232668   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:39:30.232705   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.235383   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.235716   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.235745   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.235860   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.236049   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.236203   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.236346   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.332330   44923 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:39:30.337659   44923 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:39:30.337684   44923 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:39:30.337756   44923 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:39:30.337847   44923 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:39:30.337960   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:39:30.349830   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:30.374759   44923 start.go:303] post-start completed in 142.443985ms
	I0130 20:39:30.374780   44923 fix.go:56] fixHost completed within 23.926338591s
	I0130 20:39:30.374800   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.377807   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.378189   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.378244   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.378414   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.378605   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.378803   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.378954   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.379112   44923 main.go:141] libmachine: Using SSH client type: native
	I0130 20:39:30.379649   44923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.220 22 <nil> <nil>}
	I0130 20:39:30.379677   44923 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:39:30.512888   44923 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706647170.453705676
	
	I0130 20:39:30.512916   44923 fix.go:206] guest clock: 1706647170.453705676
	I0130 20:39:30.512925   44923 fix.go:219] Guest: 2024-01-30 20:39:30.453705676 +0000 UTC Remote: 2024-01-30 20:39:30.374783103 +0000 UTC m=+364.620017880 (delta=78.922573ms)
	I0130 20:39:30.512966   44923 fix.go:190] guest clock delta is within tolerance: 78.922573ms
	I0130 20:39:30.512976   44923 start.go:83] releasing machines lock for "no-preload-473743", held for 24.064563389s
	I0130 20:39:30.513083   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.513387   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:30.516359   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.516699   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.516728   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.516908   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517590   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517747   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:39:30.517817   44923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:39:30.517864   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.517954   44923 ssh_runner.go:195] Run: cat /version.json
	I0130 20:39:30.517972   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:39:30.520814   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521070   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521202   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.521228   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521456   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:30.521480   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:30.521480   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.521682   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.521722   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:39:30.521844   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.521845   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:39:30.521997   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.522149   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:39:30.522424   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:39:30.632970   44923 ssh_runner.go:195] Run: systemctl --version
	I0130 20:39:30.638936   44923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:39:30.784288   44923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:39:30.792079   44923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:39:30.792150   44923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:39:30.809394   44923 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:39:30.809421   44923 start.go:475] detecting cgroup driver to use...
	I0130 20:39:30.809496   44923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:39:30.824383   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:39:30.838710   44923 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:39:30.838765   44923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:39:30.852928   44923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:39:30.867162   44923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:39:30.995737   44923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:39:31.113661   44923 docker.go:233] disabling docker service ...
	I0130 20:39:31.113726   44923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:39:31.127737   44923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:39:31.139320   44923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:39:31.240000   44923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:39:31.340063   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:39:31.353303   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:39:31.371834   44923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:39:31.371889   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.382579   44923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:39:31.382639   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.392544   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.403023   44923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:39:31.413288   44923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:39:31.423806   44923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:39:31.433817   44923 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:39:31.433884   44923 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:39:31.447456   44923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:39:31.457035   44923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:39:31.562847   44923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:39:31.752772   44923 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:39:31.752844   44923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:39:31.757880   44923 start.go:543] Will wait 60s for crictl version
	I0130 20:39:31.757943   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:31.761967   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:39:31.800658   44923 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:39:31.800725   44923 ssh_runner.go:195] Run: crio --version
	I0130 20:39:31.852386   44923 ssh_runner.go:195] Run: crio --version
	I0130 20:39:31.910758   44923 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0130 20:39:30.791795   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:33.292307   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:34.079616   45819 api_server.go:269] stopped: https://192.168.39.16:8443/healthz: Get "https://192.168.39.16:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 20:39:34.079674   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:31.516571   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:33.517547   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:31.912241   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetIP
	I0130 20:39:31.915377   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:31.915705   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:39:31.915735   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:39:31.915985   44923 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0130 20:39:31.920870   44923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:31.934252   44923 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 20:39:31.934304   44923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:39:31.975687   44923 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0130 20:39:31.975714   44923 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 20:39:31.975762   44923 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:31.975874   44923 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:31.975900   44923 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:31.975936   44923 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0130 20:39:31.975959   44923 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:31.975903   44923 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:31.976051   44923 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:31.976063   44923 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:31.977466   44923 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:31.977485   44923 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:31.977525   44923 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0130 20:39:31.977531   44923 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:31.977569   44923 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:31.977559   44923 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:31.977663   44923 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:31.977812   44923 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:32.130396   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0130 20:39:32.132105   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.135297   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.135817   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.136698   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.154928   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.172264   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.355420   44923 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0130 20:39:32.355504   44923 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.355537   44923 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0130 20:39:32.355580   44923 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.355454   44923 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0130 20:39:32.355636   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355675   44923 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.355606   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355724   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355763   44923 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0130 20:39:32.355803   44923 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.355844   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355855   44923 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0130 20:39:32.355884   44923 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.355805   44923 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0130 20:39:32.355928   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.355929   44923 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.355974   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:32.360081   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 20:39:32.370164   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0130 20:39:32.370202   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0130 20:39:32.370243   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 20:39:32.370259   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 20:39:32.370374   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 20:39:32.466609   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.466714   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.503174   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:32.503293   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:32.507888   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0130 20:39:32.507963   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0130 20:39:32.508061   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:32.508061   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:32.518772   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:32.518883   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0130 20:39:32.518906   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.518932   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:32.518951   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 20:39:32.518824   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0130 20:39:32.518996   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0130 20:39:32.519041   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:32.521450   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0130 20:39:32.521493   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0130 20:39:32.848844   44923 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.579929   44923 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.060972543s)
	I0130 20:39:34.579971   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0130 20:39:34.580001   44923 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.060936502s)
	I0130 20:39:34.580034   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0130 20:39:34.580045   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.061073363s)
	I0130 20:39:34.580059   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0130 20:39:34.580082   44923 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.731208309s)
	I0130 20:39:34.580132   44923 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0130 20:39:34.580088   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:34.580225   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 20:39:34.580173   44923 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.580343   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:39:34.585311   44923 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:39:34.796586   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:34.796615   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:34.796633   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:34.846035   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:39:34.846071   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:39:35.079544   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:35.091673   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 20:39:35.091710   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 20:39:35.579233   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:35.587045   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 20:39:35.587071   45819 api_server.go:103] status: https://192.168.39.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 20:39:36.079775   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:39:36.086927   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0130 20:39:36.095953   45819 api_server.go:141] control plane version: v1.16.0
	I0130 20:39:36.095976   45819 api_server.go:131] duration metric: took 7.517178171s to wait for apiserver health ...
	I0130 20:39:36.095985   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:39:36.095992   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:36.097742   45819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:39:35.792385   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:37.792648   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:36.099012   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:39:36.108427   45819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:39:36.126083   45819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:39:36.138855   45819 system_pods.go:59] 8 kube-system pods found
	I0130 20:39:36.138882   45819 system_pods.go:61] "coredns-5644d7b6d9-547k4" [6b1119fe-9c8a-44fb-b813-58271228b290] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:39:36.138888   45819 system_pods.go:61] "coredns-5644d7b6d9-dtfzh" [4cbd4f36-bc01-4f55-ba50-b7dcdcb35b9b] Running
	I0130 20:39:36.138894   45819 system_pods.go:61] "etcd-old-k8s-version-150971" [22eeed2c-7454-4b9d-8b4d-1c9a2e5feaf7] Running
	I0130 20:39:36.138899   45819 system_pods.go:61] "kube-apiserver-old-k8s-version-150971" [5ef062e6-2f78-485f-9420-e8714128e39f] Running
	I0130 20:39:36.138903   45819 system_pods.go:61] "kube-controller-manager-old-k8s-version-150971" [4e5df6df-486e-47a8-89b8-8d962212ec3e] Running
	I0130 20:39:36.138907   45819 system_pods.go:61] "kube-proxy-ncl7z" [51b28456-0070-46fc-b647-e28d6bdcfde2] Running
	I0130 20:39:36.138914   45819 system_pods.go:61] "kube-scheduler-old-k8s-version-150971" [384c4dfa-180b-4ec3-9e12-3c6d9e581c0e] Running
	I0130 20:39:36.138918   45819 system_pods.go:61] "storage-provisioner" [8a75a04c-1b80-41f6-9dfd-a7ee6f908b9d] Running
	I0130 20:39:36.138928   45819 system_pods.go:74] duration metric: took 12.820934ms to wait for pod list to return data ...
	I0130 20:39:36.138936   45819 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:39:36.142193   45819 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:39:36.142224   45819 node_conditions.go:123] node cpu capacity is 2
	I0130 20:39:36.142236   45819 node_conditions.go:105] duration metric: took 3.295582ms to run NodePressure ...
	I0130 20:39:36.142256   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:39:36.480656   45819 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:39:36.486153   45819 retry.go:31] will retry after 323.854639ms: kubelet not initialised
	I0130 20:39:36.816707   45819 retry.go:31] will retry after 303.422684ms: kubelet not initialised
	I0130 20:39:37.125369   45819 retry.go:31] will retry after 697.529029ms: kubelet not initialised
	I0130 20:39:37.829322   45819 retry.go:31] will retry after 626.989047ms: kubelet not initialised
	I0130 20:39:38.463635   45819 retry.go:31] will retry after 1.390069174s: kubelet not initialised
	I0130 20:39:35.519218   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:38.013652   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:40.014621   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:37.168054   44923 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.582708254s)
	I0130 20:39:37.168111   44923 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0130 20:39:37.168188   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.587929389s)
	I0130 20:39:37.168204   44923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:37.168226   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0130 20:39:37.168257   44923 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:37.168330   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0130 20:39:37.173865   44923 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0130 20:39:39.259662   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.091304493s)
	I0130 20:39:39.259692   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0130 20:39:39.259719   44923 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:39.259777   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0130 20:39:40.291441   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:42.292550   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:39.861179   45819 retry.go:31] will retry after 1.194254513s: kubelet not initialised
	I0130 20:39:41.067315   45819 retry.go:31] will retry after 3.766341089s: kubelet not initialised
	I0130 20:39:42.016919   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:44.514681   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:43.804203   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.54440472s)
	I0130 20:39:43.804228   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0130 20:39:43.804262   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:43.804360   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 20:39:44.790577   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:46.791751   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:44.839501   45819 retry.go:31] will retry after 2.957753887s: kubelet not initialised
	I0130 20:39:47.802749   45819 retry.go:31] will retry after 4.750837771s: kubelet not initialised
	I0130 20:39:47.016112   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:49.517716   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:46.385349   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.580960989s)
	I0130 20:39:46.385378   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0130 20:39:46.385403   44923 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:46.385446   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 20:39:48.570468   44923 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.184994355s)
	I0130 20:39:48.570504   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0130 20:39:48.570529   44923 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:48.570575   44923 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0130 20:39:49.318398   44923 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18007-4458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0130 20:39:49.318449   44923 cache_images.go:123] Successfully loaded all cached images
	I0130 20:39:49.318457   44923 cache_images.go:92] LoadImages completed in 17.342728639s
	I0130 20:39:49.318542   44923 ssh_runner.go:195] Run: crio config
	I0130 20:39:49.393074   44923 cni.go:84] Creating CNI manager for ""
	I0130 20:39:49.393094   44923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:39:49.393116   44923 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:39:49.393143   44923 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.220 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-473743 NodeName:no-preload-473743 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:39:49.393301   44923 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-473743"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:39:49.393384   44923 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-473743 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-473743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:39:49.393445   44923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0130 20:39:49.403506   44923 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:39:49.403582   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:39:49.412473   44923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0130 20:39:49.429600   44923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0130 20:39:49.445613   44923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0130 20:39:49.462906   44923 ssh_runner.go:195] Run: grep 192.168.50.220	control-plane.minikube.internal$ /etc/hosts
	I0130 20:39:49.466844   44923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:39:49.479363   44923 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743 for IP: 192.168.50.220
	I0130 20:39:49.479388   44923 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:39:49.479540   44923 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:39:49.479599   44923 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:39:49.479682   44923 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.key
	I0130 20:39:49.479766   44923 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.key.ef9da43a
	I0130 20:39:49.479832   44923 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.key
	I0130 20:39:49.479984   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:39:49.480020   44923 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:39:49.480031   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:39:49.480052   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:39:49.480082   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:39:49.480104   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:39:49.480148   44923 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:39:49.480782   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:39:49.504588   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 20:39:49.530340   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:39:49.552867   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:39:49.575974   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:39:49.598538   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:39:49.623489   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:39:49.646965   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:39:49.671998   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:39:49.695493   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:39:49.718975   44923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:39:49.741793   44923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:39:49.758291   44923 ssh_runner.go:195] Run: openssl version
	I0130 20:39:49.765053   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:39:49.775428   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.780081   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.780130   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:39:49.785510   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:39:49.797983   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:39:49.807934   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.812367   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.812416   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:39:49.818021   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:39:49.827603   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:39:49.837248   44923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.841789   44923 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.841838   44923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:39:49.847684   44923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:39:49.857387   44923 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:39:49.862411   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:39:49.871862   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:39:49.877904   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:39:49.883820   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:39:49.890534   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:39:49.898143   44923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:39:49.905607   44923 kubeadm.go:404] StartCluster: {Name:no-preload-473743 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-473743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.220 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:39:49.905713   44923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:39:49.905768   44923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:39:49.956631   44923 cri.go:89] found id: ""
	I0130 20:39:49.956705   44923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:39:49.967500   44923 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:39:49.967516   44923 kubeadm.go:636] restartCluster start
	I0130 20:39:49.967572   44923 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:39:49.977077   44923 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:49.978191   44923 kubeconfig.go:92] found "no-preload-473743" server: "https://192.168.50.220:8443"
	I0130 20:39:49.980732   44923 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:39:49.990334   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:49.990377   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:50.001427   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:50.491017   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:50.491080   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:50.503162   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:48.792438   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:51.290002   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:53.291511   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:52.558586   45819 retry.go:31] will retry after 13.209460747s: kubelet not initialised
	I0130 20:39:52.013659   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:54.013756   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:50.991212   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:50.991312   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:51.004155   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:51.491296   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:51.491369   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:51.502771   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:51.991398   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:51.991529   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:52.004164   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:52.490700   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:52.490817   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:52.504616   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:52.991009   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:52.991101   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:53.004178   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:53.490804   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:53.490897   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:53.502856   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:53.990345   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:53.990451   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:54.003812   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:54.491414   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:54.491522   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:54.502969   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:54.991126   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:54.991212   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:55.003001   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:55.490521   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:55.490609   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:55.501901   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:55.791198   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:58.289750   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:56.513098   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:58.514459   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:39:55.990820   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:55.990893   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:56.002224   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:56.490338   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:56.490432   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:56.502497   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:56.991097   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:56.991189   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:57.002115   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:57.490604   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:57.490681   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:57.501777   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:57.991320   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:57.991419   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:58.002136   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:58.490641   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:58.490724   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:58.502247   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:58.990830   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:58.990951   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:59.001469   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:59.491109   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:59.491223   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:39:59.502348   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:39:59.991097   44923 api_server.go:166] Checking apiserver status ...
	I0130 20:39:59.991182   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:40:00.002945   44923 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:40:00.002978   44923 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:40:00.002986   44923 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:40:00.002996   44923 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:40:00.003068   44923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:40:00.045168   44923 cri.go:89] found id: ""
	I0130 20:40:00.045245   44923 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:40:00.061704   44923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:40:00.074448   44923 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:40:00.074505   44923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:40:00.083478   44923 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:40:00.083502   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:00.200934   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:00.791680   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:02.791880   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:00.515342   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:02.515914   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:05.014585   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:00.824616   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.029317   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.146596   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:01.232362   44923 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:40:01.232439   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:01.733118   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:02.232964   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:02.732910   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.232934   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.732852   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:40:03.758730   44923 api_server.go:72] duration metric: took 2.526367424s to wait for apiserver process to appear ...
	I0130 20:40:03.758768   44923 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:40:03.758786   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:05.290228   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:07.290842   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:07.869847   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 20:40:07.869897   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 20:40:07.869919   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:07.986795   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:07.986841   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:08.259140   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:08.265487   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:08.265523   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:08.759024   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:08.764138   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 20:40:08.764163   44923 api_server.go:103] status: https://192.168.50.220:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 20:40:09.259821   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:40:09.265120   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0130 20:40:09.275933   44923 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 20:40:09.275956   44923 api_server.go:131] duration metric: took 5.517181599s to wait for apiserver health ...
	I0130 20:40:09.275965   44923 cni.go:84] Creating CNI manager for ""
	I0130 20:40:09.275971   44923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:40:09.277688   44923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:40:05.773670   45819 retry.go:31] will retry after 17.341210204s: kubelet not initialised
	I0130 20:40:07.014677   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:09.516836   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:09.279058   44923 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:40:09.307862   44923 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:40:09.339259   44923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:40:09.355136   44923 system_pods.go:59] 8 kube-system pods found
	I0130 20:40:09.355177   44923 system_pods.go:61] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 20:40:09.355185   44923 system_pods.go:61] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 20:40:09.355194   44923 system_pods.go:61] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 20:40:09.355201   44923 system_pods.go:61] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 20:40:09.355210   44923 system_pods.go:61] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 20:40:09.355219   44923 system_pods.go:61] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 20:40:09.355238   44923 system_pods.go:61] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:40:09.355249   44923 system_pods.go:61] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:40:09.355256   44923 system_pods.go:74] duration metric: took 15.951624ms to wait for pod list to return data ...
	I0130 20:40:09.355277   44923 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:40:09.361985   44923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:40:09.362014   44923 node_conditions.go:123] node cpu capacity is 2
	I0130 20:40:09.362025   44923 node_conditions.go:105] duration metric: took 6.74245ms to run NodePressure ...
	I0130 20:40:09.362045   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:40:09.678111   44923 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 20:40:09.687808   44923 kubeadm.go:787] kubelet initialised
	I0130 20:40:09.687828   44923 kubeadm.go:788] duration metric: took 9.689086ms waiting for restarted kubelet to initialise ...
	I0130 20:40:09.687835   44923 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:09.694574   44923 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.700190   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "coredns-76f75df574-d4c7t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.700214   44923 pod_ready.go:81] duration metric: took 5.613522ms waiting for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.700230   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "coredns-76f75df574-d4c7t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.700237   44923 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.705513   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "etcd-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.705534   44923 pod_ready.go:81] duration metric: took 5.286859ms waiting for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.705545   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "etcd-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.705553   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.710360   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-apiserver-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.710378   44923 pod_ready.go:81] duration metric: took 4.814631ms waiting for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.710388   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-apiserver-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.710396   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:09.746412   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.746447   44923 pod_ready.go:81] duration metric: took 36.037006ms waiting for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:09.746460   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:09.746469   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.143330   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-proxy-zklzt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.143364   44923 pod_ready.go:81] duration metric: took 396.879081ms waiting for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.143377   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-proxy-zklzt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.143385   44923 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.549132   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "kube-scheduler-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.549171   44923 pod_ready.go:81] duration metric: took 405.77755ms waiting for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.549192   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "kube-scheduler-no-preload-473743" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.549201   44923 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:10.942777   44923 pod_ready.go:97] node "no-preload-473743" hosting pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.942802   44923 pod_ready.go:81] duration metric: took 393.589996ms waiting for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	E0130 20:40:10.942811   44923 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-473743" hosting pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:10.942817   44923 pod_ready.go:38] duration metric: took 1.254975084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:10.942834   44923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:40:10.954894   44923 ops.go:34] apiserver oom_adj: -16
	I0130 20:40:10.954916   44923 kubeadm.go:640] restartCluster took 20.987393757s
	I0130 20:40:10.954926   44923 kubeadm.go:406] StartCluster complete in 21.049328159s
	I0130 20:40:10.954944   44923 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:40:10.955025   44923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:40:10.956906   44923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:40:10.957249   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:40:10.957343   44923 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:40:10.957411   44923 addons.go:69] Setting storage-provisioner=true in profile "no-preload-473743"
	I0130 20:40:10.957434   44923 addons.go:234] Setting addon storage-provisioner=true in "no-preload-473743"
	I0130 20:40:10.957440   44923 addons.go:69] Setting metrics-server=true in profile "no-preload-473743"
	I0130 20:40:10.957447   44923 config.go:182] Loaded profile config "no-preload-473743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0130 20:40:10.957451   44923 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:40:10.957471   44923 addons.go:234] Setting addon metrics-server=true in "no-preload-473743"
	W0130 20:40:10.957481   44923 addons.go:243] addon metrics-server should already be in state true
	I0130 20:40:10.957512   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.957522   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.957946   44923 addons.go:69] Setting default-storageclass=true in profile "no-preload-473743"
	I0130 20:40:10.957911   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958230   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.958246   44923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-473743"
	I0130 20:40:10.958477   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958517   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.958600   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.958621   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.962458   44923 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-473743" context rescaled to 1 replicas
	I0130 20:40:10.962497   44923 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.220 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:40:10.964710   44923 out.go:177] * Verifying Kubernetes components...
	I0130 20:40:10.966259   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:40:10.975195   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45125
	I0130 20:40:10.975661   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.976231   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.976262   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.976885   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.977509   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.977542   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.978199   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0130 20:40:10.978220   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I0130 20:40:10.979039   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.979106   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.979581   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.979600   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.979584   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.979663   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.979964   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.980074   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.980160   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:10.980655   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.980690   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.984068   44923 addons.go:234] Setting addon default-storageclass=true in "no-preload-473743"
	W0130 20:40:10.984119   44923 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:40:10.984155   44923 host.go:66] Checking if "no-preload-473743" exists ...
	I0130 20:40:10.984564   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:10.984615   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:10.997126   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
	I0130 20:40:10.997598   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.997990   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.998006   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:10.998355   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:10.998520   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:10.998838   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0130 20:40:10.999186   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:10.999589   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:10.999604   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.000003   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.000289   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:11.000433   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.002723   44923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:40:11.001789   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.004317   44923 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:40:11.004329   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:40:11.004345   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.005791   44923 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:40:11.007234   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:40:11.007246   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:40:11.007259   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.006415   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0130 20:40:11.007375   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.007826   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:11.008219   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.008258   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.008405   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.008550   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:11.008566   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.008624   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.008780   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.008900   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.008904   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.009548   44923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:40:11.009578   44923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:40:11.010414   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.010713   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.010733   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.010938   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.011137   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.011308   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.011424   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.047889   44923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44097
	I0130 20:40:11.048317   44923 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:40:11.048800   44923 main.go:141] libmachine: Using API Version  1
	I0130 20:40:11.048820   44923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:40:11.049210   44923 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:40:11.049451   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetState
	I0130 20:40:11.051439   44923 main.go:141] libmachine: (no-preload-473743) Calling .DriverName
	I0130 20:40:11.052012   44923 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:40:11.052030   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:40:11.052049   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHHostname
	I0130 20:40:11.055336   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.055865   44923 main.go:141] libmachine: (no-preload-473743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:07:4a", ip: ""} in network mk-no-preload-473743: {Iface:virbr2 ExpiryTime:2024-01-30 21:39:19 +0000 UTC Type:0 Mac:52:54:00:c5:07:4a Iaid: IPaddr:192.168.50.220 Prefix:24 Hostname:no-preload-473743 Clientid:01:52:54:00:c5:07:4a}
	I0130 20:40:11.055888   44923 main.go:141] libmachine: (no-preload-473743) DBG | domain no-preload-473743 has defined IP address 192.168.50.220 and MAC address 52:54:00:c5:07:4a in network mk-no-preload-473743
	I0130 20:40:11.055976   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHPort
	I0130 20:40:11.056175   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHKeyPath
	I0130 20:40:11.056344   44923 main.go:141] libmachine: (no-preload-473743) Calling .GetSSHUsername
	I0130 20:40:11.056461   44923 sshutil.go:53] new ssh client: &{IP:192.168.50.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/no-preload-473743/id_rsa Username:docker}
	I0130 20:40:11.176670   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:40:11.176694   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:40:11.182136   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:40:11.194238   44923 node_ready.go:35] waiting up to 6m0s for node "no-preload-473743" to be "Ready" ...
	I0130 20:40:11.194301   44923 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 20:40:11.213877   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:40:11.222566   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:40:11.222591   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:40:11.264089   44923 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:40:11.264119   44923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:40:11.337758   44923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:40:12.237415   44923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.055244284s)
	I0130 20:40:12.237483   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237482   44923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023570997s)
	I0130 20:40:12.237504   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237521   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237538   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237867   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.237927   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.237949   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.237964   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.237973   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.237986   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.238018   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.238030   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.238303   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.238319   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.238415   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.238473   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.238485   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.245407   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.245432   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.245653   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.245670   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.287632   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.287660   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.287973   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.287998   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.288000   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.288014   44923 main.go:141] libmachine: Making call to close driver server
	I0130 20:40:12.288024   44923 main.go:141] libmachine: (no-preload-473743) Calling .Close
	I0130 20:40:12.288266   44923 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:40:12.288286   44923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:40:12.288297   44923 addons.go:470] Verifying addon metrics-server=true in "no-preload-473743"
	I0130 20:40:12.288352   44923 main.go:141] libmachine: (no-preload-473743) DBG | Closing plugin on server side
	I0130 20:40:12.290298   44923 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:40:09.291762   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:11.791994   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:12.016265   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:14.515097   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:12.291867   44923 addons.go:505] enable addons completed in 1.334521495s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:40:13.200767   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:15.699345   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:14.291583   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:16.292248   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:17.014332   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:19.014556   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:18.198624   44923 node_ready.go:58] node "no-preload-473743" has status "Ready":"False"
	I0130 20:40:18.699015   44923 node_ready.go:49] node "no-preload-473743" has status "Ready":"True"
	I0130 20:40:18.699041   44923 node_ready.go:38] duration metric: took 7.504770144s waiting for node "no-preload-473743" to be "Ready" ...
	I0130 20:40:18.699050   44923 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:18.709647   44923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.718022   44923 pod_ready.go:92] pod "coredns-76f75df574-d4c7t" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:18.718046   44923 pod_ready.go:81] duration metric: took 8.370541ms waiting for pod "coredns-76f75df574-d4c7t" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.718054   44923 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.722992   44923 pod_ready.go:92] pod "etcd-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:18.723012   44923 pod_ready.go:81] duration metric: took 4.951762ms waiting for pod "etcd-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:18.723020   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:20.732288   44923 pod_ready.go:102] pod "kube-apiserver-no-preload-473743" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:18.791445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:21.290205   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.123817   45819 kubeadm.go:787] kubelet initialised
	I0130 20:40:23.123842   45819 kubeadm.go:788] duration metric: took 46.643164333s waiting for restarted kubelet to initialise ...
	I0130 20:40:23.123849   45819 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:40:23.128282   45819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.132665   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.132688   45819 pod_ready.go:81] duration metric: took 4.375362ms waiting for pod "coredns-5644d7b6d9-547k4" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.132701   45819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.137072   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.137092   45819 pod_ready.go:81] duration metric: took 4.379467ms waiting for pod "coredns-5644d7b6d9-dtfzh" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.137102   45819 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.142038   45819 pod_ready.go:92] pod "etcd-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.142058   45819 pod_ready.go:81] duration metric: took 4.949104ms waiting for pod "etcd-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.142070   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.146657   45819 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.146676   45819 pod_ready.go:81] duration metric: took 4.598238ms waiting for pod "kube-apiserver-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.146686   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.518159   45819 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.518189   45819 pod_ready.go:81] duration metric: took 371.488133ms waiting for pod "kube-controller-manager-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.518203   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ncl7z" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.919594   45819 pod_ready.go:92] pod "kube-proxy-ncl7z" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.919628   45819 pod_ready.go:81] duration metric: took 401.417322ms waiting for pod "kube-proxy-ncl7z" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.919644   45819 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:24.318125   45819 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:24.318152   45819 pod_ready.go:81] duration metric: took 398.499457ms waiting for pod "kube-scheduler-old-k8s-version-150971" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:24.318166   45819 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.513600   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.514060   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:21.233466   44923 pod_ready.go:92] pod "kube-apiserver-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.233494   44923 pod_ready.go:81] duration metric: took 2.510466903s waiting for pod "kube-apiserver-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.233507   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.240688   44923 pod_ready.go:92] pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.240709   44923 pod_ready.go:81] duration metric: took 7.194165ms waiting for pod "kube-controller-manager-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.240721   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.248250   44923 pod_ready.go:92] pod "kube-proxy-zklzt" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:21.248271   44923 pod_ready.go:81] duration metric: took 7.542304ms waiting for pod "kube-proxy-zklzt" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:21.248278   44923 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.256673   44923 pod_ready.go:92] pod "kube-scheduler-no-preload-473743" in "kube-system" namespace has status "Ready":"True"
	I0130 20:40:23.256700   44923 pod_ready.go:81] duration metric: took 2.008414366s waiting for pod "kube-scheduler-no-preload-473743" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:23.256712   44923 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	I0130 20:40:25.263480   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:23.790334   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.290232   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.292270   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.324649   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.825120   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:26.016305   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:28.513650   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:27.264434   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:29.764240   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:30.793210   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:33.292255   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:31.326850   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:33.824698   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:30.514448   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:32.518435   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.013676   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:32.264144   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:34.763689   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.789964   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.791095   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:35.825018   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:38.326094   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.014222   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:39.517868   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:37.265137   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:39.764115   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:40.290332   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.290850   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:40.327135   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.824370   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.014917   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.516872   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:42.264387   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.265504   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.291131   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.790512   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:44.827108   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:47.327816   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.518922   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.014136   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:46.765151   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.265178   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:48.790952   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.291730   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:49.824442   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:52.325401   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.014513   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.518388   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:51.266567   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.764501   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:53.789915   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:55.790332   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:57.791445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:54.825612   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:57.324364   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:59.327308   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:56.020804   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:58.515544   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:56.263707   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:58.264200   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:00.264261   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:40:59.792066   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:02.289879   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:01.824631   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:03.824749   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:01.014649   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:03.014805   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:05.017318   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:02.763825   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:04.764040   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:04.290927   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.791853   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.326570   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:08.824889   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:07.516190   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:10.018532   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:06.765257   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:09.263466   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:09.290744   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:11.791416   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:10.825025   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:13.324947   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:12.514850   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:14.522700   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:11.263911   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:13.763429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:15.766371   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:14.289786   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:16.291753   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:15.325297   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:17.824762   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:17.014087   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:19.518139   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:18.263727   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:20.263854   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:18.791517   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:21.292155   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:19.825751   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:22.324733   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:21.518205   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:24.015562   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:22.767815   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:25.263283   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:23.790847   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.290464   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:24.824063   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.825938   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:29.325683   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:26.016724   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:28.514670   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:27.264429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:29.264577   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:28.791861   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.291558   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.824367   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.824771   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:30.515432   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.014091   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:31.265902   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.764211   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.764788   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:33.791968   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:36.290991   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:38.291383   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.824891   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.825500   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:35.514120   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.514579   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:39.516165   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:37.765006   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.263816   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.791224   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.792487   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:40.326148   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.825282   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.014531   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:44.514337   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:42.264845   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:44.764275   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:45.290370   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.790557   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:45.325184   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.825091   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:46.515035   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.013829   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:47.263752   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.263882   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:49.790715   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:52.291348   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:50.326963   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:52.825278   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:51.014381   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:53.016755   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:51.264167   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:53.264888   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.265000   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:54.291846   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:56.790351   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.325156   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:57.325446   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:59.326114   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:55.515866   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:58.013768   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:00.014052   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:57.763548   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:59.764374   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:41:58.790584   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:01.294420   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:01.827046   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.325425   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:02.514100   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.516981   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:02.264420   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:04.264851   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:03.790918   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.290560   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:08.291334   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.824232   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:08.824527   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:07.014375   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:09.513980   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:06.764222   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:09.264299   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:10.292477   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:12.795626   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:10.825706   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:13.325572   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:11.514369   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:14.016090   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:11.264881   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:13.763625   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.764616   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.290292   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:17.790263   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:15.326185   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:17.826504   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:16.518263   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:19.014219   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:18.265723   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:20.764663   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:19.792068   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:22.292221   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:20.325069   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:22.326307   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:21.014811   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:23.014876   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:25.017016   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:23.264098   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:25.267065   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:24.791616   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.291739   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:24.825416   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:26.826380   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.325717   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.513692   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:30.015246   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:27.763938   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.764135   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:29.789997   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.790272   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.825466   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:33.826959   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:32.513718   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:35.014948   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:31.780185   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:34.265062   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:33.790477   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.290139   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.291801   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.325475   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.825210   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:37.513778   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:39.518155   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:36.764137   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:38.765005   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:40.790050   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:42.791739   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:41.325239   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:43.826300   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:42.013844   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:44.014396   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:41.268687   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:43.765101   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:45.290120   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:47.291365   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.325321   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.824944   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.015721   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.514689   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:46.269498   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:48.763780   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:50.765289   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:49.790212   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:52.291090   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:51.324622   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:53.324873   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:51.015934   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:53.016171   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:52.765777   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.264419   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:54.292666   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:56.790098   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.825230   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.324546   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:55.514240   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.014796   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:57.764094   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:59.764594   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:42:58.790445   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.790844   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:03.290632   45037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.325916   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:02.824174   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:00.514203   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:02.515317   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:05.018840   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:01.767672   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:04.263736   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:04.290221   45037 pod_ready.go:81] duration metric: took 4m0.006974938s waiting for pod "metrics-server-57f55c9bc5-ghg9n" in "kube-system" namespace to be "Ready" ...
	E0130 20:43:04.290244   45037 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:43:04.290252   45037 pod_ready.go:38] duration metric: took 4m4.550384705s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:43:04.290265   45037 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:43:04.290289   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:04.290330   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:04.354567   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:04.354594   45037 cri.go:89] found id: ""
	I0130 20:43:04.354603   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:04.354664   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.359890   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:04.359961   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:04.399415   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:04.399437   45037 cri.go:89] found id: ""
	I0130 20:43:04.399444   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:04.399484   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.404186   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:04.404241   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:04.445968   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:04.445994   45037 cri.go:89] found id: ""
	I0130 20:43:04.446003   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:04.446060   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.450215   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:04.450285   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:04.492438   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:04.492462   45037 cri.go:89] found id: ""
	I0130 20:43:04.492476   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:04.492537   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.497227   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:04.497301   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:04.535936   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:04.535960   45037 cri.go:89] found id: ""
	I0130 20:43:04.535970   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:04.536026   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.540968   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:04.541046   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:04.584192   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:04.584214   45037 cri.go:89] found id: ""
	I0130 20:43:04.584222   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:04.584280   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.588842   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:04.588914   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:04.630957   45037 cri.go:89] found id: ""
	I0130 20:43:04.630984   45037 logs.go:276] 0 containers: []
	W0130 20:43:04.630994   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:04.631000   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:04.631057   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:04.672712   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:04.672741   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:04.672747   45037 cri.go:89] found id: ""
	I0130 20:43:04.672757   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:04.672830   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.677537   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:04.681806   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:04.681833   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:04.743389   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:04.743420   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:04.783857   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:04.783891   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:04.838800   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:04.838827   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:04.897337   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:04.897361   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:04.954337   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:04.954367   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:05.110447   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:05.110476   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:05.169238   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:05.169275   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:05.209860   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:05.209890   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:05.224272   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:05.224296   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:05.264818   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:05.264857   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:05.304626   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:05.304657   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:05.748336   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:05.748377   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.306639   45037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:43:08.324001   45037 api_server.go:72] duration metric: took 4m16.400279002s to wait for apiserver process to appear ...
	I0130 20:43:08.324028   45037 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:43:08.324061   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:08.324111   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:08.364000   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.364026   45037 cri.go:89] found id: ""
	I0130 20:43:08.364036   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:08.364093   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.368770   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:08.368843   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:08.411371   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:08.411394   45037 cri.go:89] found id: ""
	I0130 20:43:08.411404   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:08.411462   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.415582   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:08.415648   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:08.455571   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:08.455601   45037 cri.go:89] found id: ""
	I0130 20:43:08.455612   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:08.455662   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.459908   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:08.459972   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:08.497350   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:08.497374   45037 cri.go:89] found id: ""
	I0130 20:43:08.497383   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:08.497441   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.501504   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:08.501552   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:08.550031   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:08.550057   45037 cri.go:89] found id: ""
	I0130 20:43:08.550066   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:08.550181   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.555166   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:08.555215   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:08.590903   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:08.590929   45037 cri.go:89] found id: ""
	I0130 20:43:08.590939   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:08.590997   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.594837   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:08.594888   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:08.630989   45037 cri.go:89] found id: ""
	I0130 20:43:08.631015   45037 logs.go:276] 0 containers: []
	W0130 20:43:08.631024   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:08.631029   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:08.631072   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:08.669579   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:08.669603   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:08.669609   45037 cri.go:89] found id: ""
	I0130 20:43:08.669617   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:08.669666   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.673938   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:08.677733   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:08.677757   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:08.726492   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:08.726519   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:04.825623   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:07.331997   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:07.514074   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:09.514484   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:06.264040   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:08.264505   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:10.764072   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:08.740624   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:08.740645   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:08.792517   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:08.792547   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:08.829131   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:08.829166   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:08.870777   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:08.870802   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:08.909648   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:08.909678   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:08.953671   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:08.953701   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:08.989624   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:08.989648   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:09.383141   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:09.383174   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:09.442685   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:09.442719   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:09.563370   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:09.563398   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:09.614390   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:09.614422   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:12.156906   45037 api_server.go:253] Checking apiserver healthz at https://192.168.61.63:8443/healthz ...
	I0130 20:43:12.161980   45037 api_server.go:279] https://192.168.61.63:8443/healthz returned 200:
	ok
	I0130 20:43:12.163284   45037 api_server.go:141] control plane version: v1.28.4
	I0130 20:43:12.163308   45037 api_server.go:131] duration metric: took 3.839271753s to wait for apiserver health ...
	I0130 20:43:12.163318   45037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:43:12.163343   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:43:12.163389   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:43:12.207351   45037 cri.go:89] found id: "f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:12.207372   45037 cri.go:89] found id: ""
	I0130 20:43:12.207381   45037 logs.go:276] 1 containers: [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d]
	I0130 20:43:12.207436   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.213923   45037 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:43:12.213987   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:43:12.263647   45037 cri.go:89] found id: "0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:12.263680   45037 cri.go:89] found id: ""
	I0130 20:43:12.263690   45037 logs.go:276] 1 containers: [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18]
	I0130 20:43:12.263743   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.268327   45037 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:43:12.268381   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:43:12.310594   45037 cri.go:89] found id: "4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:12.310614   45037 cri.go:89] found id: ""
	I0130 20:43:12.310622   45037 logs.go:276] 1 containers: [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d]
	I0130 20:43:12.310670   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.315134   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:43:12.315185   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:43:12.359384   45037 cri.go:89] found id: "74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:12.359404   45037 cri.go:89] found id: ""
	I0130 20:43:12.359412   45037 logs.go:276] 1 containers: [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f]
	I0130 20:43:12.359468   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.363796   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:43:12.363856   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:43:12.399741   45037 cri.go:89] found id: "cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:12.399771   45037 cri.go:89] found id: ""
	I0130 20:43:12.399783   45037 logs.go:276] 1 containers: [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254]
	I0130 20:43:12.399844   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.404237   45037 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:43:12.404302   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:43:12.457772   45037 cri.go:89] found id: "b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:12.457806   45037 cri.go:89] found id: ""
	I0130 20:43:12.457816   45037 logs.go:276] 1 containers: [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2]
	I0130 20:43:12.457876   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.462316   45037 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:43:12.462378   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:43:12.499660   45037 cri.go:89] found id: ""
	I0130 20:43:12.499690   45037 logs.go:276] 0 containers: []
	W0130 20:43:12.499699   45037 logs.go:278] No container was found matching "kindnet"
	I0130 20:43:12.499707   45037 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:43:12.499763   45037 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:43:12.548931   45037 cri.go:89] found id: "84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:12.548961   45037 cri.go:89] found id: "5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:12.548969   45037 cri.go:89] found id: ""
	I0130 20:43:12.548978   45037 logs.go:276] 2 containers: [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5]
	I0130 20:43:12.549037   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.552983   45037 ssh_runner.go:195] Run: which crictl
	I0130 20:43:12.557322   45037 logs.go:123] Gathering logs for container status ...
	I0130 20:43:12.557340   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:43:12.599784   45037 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:43:12.599812   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:43:12.716124   45037 logs.go:123] Gathering logs for kube-apiserver [f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d] ...
	I0130 20:43:12.716156   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b510da3b115a8c39b488fb87ba1f13319fd6d670a9a7d0879ebd9ac24ffb2d"
	I0130 20:43:12.766940   45037 logs.go:123] Gathering logs for storage-provisioner [5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5] ...
	I0130 20:43:12.766980   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dbd1a278b4953f791b68b49d4109929dceeb105df0c12d09c3c2699608343c5"
	I0130 20:43:12.804026   45037 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:43:12.804059   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:43:13.165109   45037 logs.go:123] Gathering logs for coredns [4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d] ...
	I0130 20:43:13.165153   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c08f1c12145a7e202ba09f3a08fe7ea8eff480c08d2bc5f918076433c071c3d"
	I0130 20:43:13.204652   45037 logs.go:123] Gathering logs for kube-scheduler [74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f] ...
	I0130 20:43:13.204679   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74b99df1e69b6e48d0530cb34943cdffc6afec4546e2bb54e7acaae2c78f824f"
	I0130 20:43:13.242644   45037 logs.go:123] Gathering logs for storage-provisioner [84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac] ...
	I0130 20:43:13.242675   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ab3bb4fc32793ccb2661d8e933bd0ba2a039eb3fa9cf94570034ff1f9ffcac"
	I0130 20:43:13.282527   45037 logs.go:123] Gathering logs for kubelet ...
	I0130 20:43:13.282558   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:43:13.335128   45037 logs.go:123] Gathering logs for etcd [0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18] ...
	I0130 20:43:13.335156   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0684f62c32df060a7806afb8034b4eb9e5b98b9c4c179c6d97f9974dfe649f18"
	I0130 20:43:13.385564   45037 logs.go:123] Gathering logs for kube-controller-manager [b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2] ...
	I0130 20:43:13.385599   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b53924cf08f0c54689609e5e63bce69fbd3993bca6cf646049a023c24a0dbfa2"
	I0130 20:43:13.449564   45037 logs.go:123] Gathering logs for dmesg ...
	I0130 20:43:13.449603   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:43:13.464376   45037 logs.go:123] Gathering logs for kube-proxy [cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254] ...
	I0130 20:43:13.464406   45037 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cceda50230a0f6a5ee347c2e166d2005856f318c517e7fd77af5c92c1ef31254"
	I0130 20:43:09.825882   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:11.827628   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.325309   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:12.012894   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.014496   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:12.765167   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:14.765356   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:16.017083   45037 system_pods.go:59] 8 kube-system pods found
	I0130 20:43:16.017121   45037 system_pods.go:61] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running
	I0130 20:43:16.017128   45037 system_pods.go:61] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running
	I0130 20:43:16.017135   45037 system_pods.go:61] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running
	I0130 20:43:16.017141   45037 system_pods.go:61] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running
	I0130 20:43:16.017148   45037 system_pods.go:61] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running
	I0130 20:43:16.017154   45037 system_pods.go:61] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running
	I0130 20:43:16.017165   45037 system_pods.go:61] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:43:16.017172   45037 system_pods.go:61] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running
	I0130 20:43:16.017185   45037 system_pods.go:74] duration metric: took 3.853859786s to wait for pod list to return data ...
	I0130 20:43:16.017198   45037 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:43:16.019949   45037 default_sa.go:45] found service account: "default"
	I0130 20:43:16.019967   45037 default_sa.go:55] duration metric: took 2.760881ms for default service account to be created ...
	I0130 20:43:16.019976   45037 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:43:16.025198   45037 system_pods.go:86] 8 kube-system pods found
	I0130 20:43:16.025219   45037 system_pods.go:89] "coredns-5dd5756b68-jqzzv" [59f362b6-606e-4bcd-b5eb-c8822aaf8b9c] Running
	I0130 20:43:16.025225   45037 system_pods.go:89] "etcd-embed-certs-208583" [798094bf-2aac-4f39-afc1-4f873bdd08ee] Running
	I0130 20:43:16.025229   45037 system_pods.go:89] "kube-apiserver-embed-certs-208583" [b96b9f6e-b36a-47bf-8f6d-01f883501766] Running
	I0130 20:43:16.025234   45037 system_pods.go:89] "kube-controller-manager-embed-certs-208583" [3dbd9e29-5c95-40f5-acd8-9767f6ce7a03] Running
	I0130 20:43:16.025238   45037 system_pods.go:89] "kube-proxy-g7q5t" [47f109e0-7a56-472f-8c7e-ba2b138de352] Running
	I0130 20:43:16.025242   45037 system_pods.go:89] "kube-scheduler-embed-certs-208583" [e8a37eb1-599f-478f-bbc1-b44b1020f291] Running
	I0130 20:43:16.025248   45037 system_pods.go:89] "metrics-server-57f55c9bc5-ghg9n" [37700115-83e9-440a-b396-56f50adb6311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:43:16.025258   45037 system_pods.go:89] "storage-provisioner" [15108916-a630-4208-99f7-5706db407b22] Running
	I0130 20:43:16.025264   45037 system_pods.go:126] duration metric: took 5.282813ms to wait for k8s-apps to be running ...
	I0130 20:43:16.025270   45037 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:43:16.025309   45037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:43:16.043415   45037 system_svc.go:56] duration metric: took 18.134458ms WaitForService to wait for kubelet.
	I0130 20:43:16.043443   45037 kubeadm.go:581] duration metric: took 4m24.119724167s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:43:16.043472   45037 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:43:16.046999   45037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:43:16.047021   45037 node_conditions.go:123] node cpu capacity is 2
	I0130 20:43:16.047035   45037 node_conditions.go:105] duration metric: took 3.556321ms to run NodePressure ...
	I0130 20:43:16.047048   45037 start.go:228] waiting for startup goroutines ...
	I0130 20:43:16.047061   45037 start.go:233] waiting for cluster config update ...
	I0130 20:43:16.047078   45037 start.go:242] writing updated cluster config ...
	I0130 20:43:16.047368   45037 ssh_runner.go:195] Run: rm -f paused
	I0130 20:43:16.098760   45037 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:43:16.100739   45037 out.go:177] * Done! kubectl is now configured to use "embed-certs-208583" cluster and "default" namespace by default
	I0130 20:43:16.326450   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:18.824456   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:16.514335   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:19.014528   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:17.264059   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:19.264543   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:20.824649   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.324731   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:21.014634   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.513609   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:21.763771   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:23.764216   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:25.325575   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:27.825708   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:25.514335   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:27.506991   45441 pod_ready.go:81] duration metric: took 4m0.000368672s waiting for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" ...
	E0130 20:43:27.507020   45441 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hzfwg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 20:43:27.507037   45441 pod_ready.go:38] duration metric: took 4m11.059827725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:43:27.507060   45441 kubeadm.go:640] restartCluster took 4m33.680532974s
	W0130 20:43:27.507128   45441 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 20:43:27.507159   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 20:43:26.264077   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:28.264502   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:30.764952   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:30.325157   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:32.325570   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:32.766530   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:35.264541   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:34.825545   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:36.825757   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:38.825922   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:37.764613   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:39.772391   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:41.253066   45441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.745883202s)
	I0130 20:43:41.253138   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:43:41.267139   45441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:43:41.276814   45441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:43:41.286633   45441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:43:41.286678   45441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 20:43:41.340190   45441 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 20:43:41.340255   45441 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 20:43:41.491373   45441 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 20:43:41.491524   45441 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 20:43:41.491644   45441 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 20:43:41.735659   45441 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:43:41.737663   45441 out.go:204]   - Generating certificates and keys ...
	I0130 20:43:41.737778   45441 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 20:43:41.737875   45441 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 20:43:41.737961   45441 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 20:43:41.738034   45441 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 20:43:41.738116   45441 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 20:43:41.738215   45441 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 20:43:41.738295   45441 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 20:43:41.738381   45441 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 20:43:41.738481   45441 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 20:43:41.738542   45441 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 20:43:41.738578   45441 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 20:43:41.738633   45441 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:43:41.894828   45441 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:43:42.122408   45441 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:43:42.406185   45441 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:43:42.526794   45441 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:43:42.527473   45441 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:43:42.529906   45441 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:43:40.826403   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:43.324650   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:42.531956   45441 out.go:204]   - Booting up control plane ...
	I0130 20:43:42.532077   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:43:42.532175   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:43:42.532276   45441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:43:42.550440   45441 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:43:42.551432   45441 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:43:42.551515   45441 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 20:43:42.666449   45441 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 20:43:42.265430   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:44.268768   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:45.325121   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:47.325585   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:46.768728   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:49.264313   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:50.670814   45441 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004172 seconds
	I0130 20:43:50.670940   45441 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 20:43:50.693878   45441 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 20:43:51.228257   45441 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 20:43:51.228498   45441 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-877742 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 20:43:51.743336   45441 kubeadm.go:322] [bootstrap-token] Using token: hhyk9t.fiwckj4dbaljm18s
	I0130 20:43:51.744898   45441 out.go:204]   - Configuring RBAC rules ...
	I0130 20:43:51.744996   45441 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 20:43:51.755911   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 20:43:51.769124   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 20:43:51.773192   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 20:43:51.776643   45441 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 20:43:51.780261   45441 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 20:43:51.807541   45441 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 20:43:52.070376   45441 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 20:43:52.167958   45441 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 20:43:52.167994   45441 kubeadm.go:322] 
	I0130 20:43:52.168050   45441 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 20:43:52.168061   45441 kubeadm.go:322] 
	I0130 20:43:52.168142   45441 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 20:43:52.168157   45441 kubeadm.go:322] 
	I0130 20:43:52.168193   45441 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 20:43:52.168254   45441 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 20:43:52.168325   45441 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 20:43:52.168336   45441 kubeadm.go:322] 
	I0130 20:43:52.168399   45441 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 20:43:52.168409   45441 kubeadm.go:322] 
	I0130 20:43:52.168469   45441 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 20:43:52.168480   45441 kubeadm.go:322] 
	I0130 20:43:52.168546   45441 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 20:43:52.168639   45441 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 20:43:52.168731   45441 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 20:43:52.168741   45441 kubeadm.go:322] 
	I0130 20:43:52.168834   45441 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 20:43:52.168928   45441 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 20:43:52.168938   45441 kubeadm.go:322] 
	I0130 20:43:52.169033   45441 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token hhyk9t.fiwckj4dbaljm18s \
	I0130 20:43:52.169145   45441 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 20:43:52.169175   45441 kubeadm.go:322] 	--control-plane 
	I0130 20:43:52.169185   45441 kubeadm.go:322] 
	I0130 20:43:52.169274   45441 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 20:43:52.169283   45441 kubeadm.go:322] 
	I0130 20:43:52.169374   45441 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token hhyk9t.fiwckj4dbaljm18s \
	I0130 20:43:52.169485   45441 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 20:43:52.170103   45441 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 20:43:52.170128   45441 cni.go:84] Creating CNI manager for ""
	I0130 20:43:52.170138   45441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:43:52.171736   45441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:43:49.827004   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:51.828301   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:54.324951   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:52.173096   45441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:43:52.207763   45441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:43:52.239391   45441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:43:52.239528   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:52.239550   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=default-k8s-diff-port-877742 minikube.k8s.io/updated_at=2024_01_30T20_43_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:52.359837   45441 ops.go:34] apiserver oom_adj: -16
	I0130 20:43:52.622616   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:53.123165   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:53.622655   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:54.122819   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:54.623579   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:55.122784   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:51.265017   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:53.765449   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:56.826059   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:59.324992   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:55.622980   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.123436   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.623691   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:57.122685   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:57.623150   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:58.123358   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:58.623234   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:59.122804   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:59.623408   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:00.122730   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:43:56.264593   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:43:58.764827   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:00.765740   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:01.325185   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:03.325582   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:00.622649   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:01.123007   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:01.623488   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:02.123117   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:02.623186   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:03.122987   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:03.623625   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:04.123576   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:04.623493   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:05.123073   45441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:05.292330   45441 kubeadm.go:1088] duration metric: took 13.052870929s to wait for elevateKubeSystemPrivileges.
	I0130 20:44:05.292359   45441 kubeadm.go:406] StartCluster complete in 5m11.519002976s
	I0130 20:44:05.292376   45441 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:05.292446   45441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:44:05.294511   45441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:05.296490   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:44:05.296705   45441 config.go:182] Loaded profile config "default-k8s-diff-port-877742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:44:05.296739   45441 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:44:05.296797   45441 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.296814   45441 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.296823   45441 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:44:05.296872   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.297028   45441 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.297068   45441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-877742"
	I0130 20:44:05.297257   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297282   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.297449   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297476   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.297476   45441 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-877742"
	I0130 20:44:05.297498   45441 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.297512   45441 addons.go:243] addon metrics-server should already be in state true
	I0130 20:44:05.297557   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.297934   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.297972   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.314618   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0130 20:44:05.314913   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I0130 20:44:05.315139   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.315638   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.315718   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.315751   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.316139   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.316295   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.316318   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.316342   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0130 20:44:05.316649   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.316695   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.316729   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.316842   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.317131   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.317573   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.317598   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.317967   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.318507   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.318539   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.321078   45441 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-877742"
	W0130 20:44:05.321104   45441 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:44:05.321129   45441 host.go:66] Checking if "default-k8s-diff-port-877742" exists ...
	I0130 20:44:05.321503   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.321530   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.338144   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0130 20:44:05.338180   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0130 20:44:05.338717   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.338798   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.339318   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.339325   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.339343   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.339345   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.339804   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.339819   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.339987   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.340017   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.340889   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33925
	I0130 20:44:05.341348   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.341847   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.341870   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.342243   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.342328   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.344137   45441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:44:05.342641   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.344745   45441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:05.345833   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:44:05.345871   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:44:05.345889   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.345936   45441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:44:05.347567   45441 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:05.347585   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:44:05.347602   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.346048   45441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:05.348959   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.349635   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.349686   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.349853   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.350119   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.350404   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.350619   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:05.351435   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.351548   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.351565   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.351753   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.351924   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.352094   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.352237   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:05.366786   45441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I0130 20:44:05.367211   45441 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:05.367744   45441 main.go:141] libmachine: Using API Version  1
	I0130 20:44:05.367768   45441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:05.368174   45441 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:05.368435   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetState
	I0130 20:44:05.370411   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .DriverName
	I0130 20:44:05.370688   45441 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:05.370707   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:44:05.370726   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHHostname
	I0130 20:44:05.375681   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHPort
	I0130 20:44:05.375726   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.375758   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e0:0b", ip: ""} in network mk-default-k8s-diff-port-877742: {Iface:virbr4 ExpiryTime:2024-01-30 21:30:27 +0000 UTC Type:0 Mac:52:54:00:c4:e0:0b Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:default-k8s-diff-port-877742 Clientid:01:52:54:00:c4:e0:0b}
	I0130 20:44:05.375778   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | domain default-k8s-diff-port-877742 has defined IP address 192.168.72.52 and MAC address 52:54:00:c4:e0:0b in network mk-default-k8s-diff-port-877742
	I0130 20:44:05.375938   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHKeyPath
	I0130 20:44:05.376136   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .GetSSHUsername
	I0130 20:44:05.376324   45441 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/default-k8s-diff-port-877742/id_rsa Username:docker}
	I0130 20:44:03.263112   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:05.264610   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:05.536173   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:44:05.547763   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:44:05.547783   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:44:05.561439   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:05.589801   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:05.619036   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:44:05.619063   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:44:05.672972   45441 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:05.672993   45441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:44:05.753214   45441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:05.861799   45441 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-877742" context rescaled to 1 replicas
	I0130 20:44:05.861852   45441 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.52 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:44:05.863602   45441 out.go:177] * Verifying Kubernetes components...
	I0130 20:44:05.864716   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:07.418910   45441 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.882691784s)
	I0130 20:44:07.418945   45441 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0130 20:44:07.960063   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.370223433s)
	I0130 20:44:07.960120   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960161   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.960158   45441 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.095417539s)
	I0130 20:44:07.960143   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.206889959s)
	I0130 20:44:07.960223   45441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398756648s)
	I0130 20:44:07.960234   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960247   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.960190   45441 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-877742" to be "Ready" ...
	I0130 20:44:07.960251   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.960319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961892   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961892   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961902   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961919   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961921   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.961902   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961934   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961936   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961941   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961944   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961950   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.961955   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.961970   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.961980   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:07.961990   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:07.962309   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962319   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962340   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962348   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962350   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.962357   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.962380   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:07.962380   45441 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-877742"
	I0130 20:44:07.962420   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:07.962439   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:07.979672   45441 node_ready.go:49] node "default-k8s-diff-port-877742" has status "Ready":"True"
	I0130 20:44:07.979700   45441 node_ready.go:38] duration metric: took 19.437813ms waiting for node "default-k8s-diff-port-877742" to be "Ready" ...
	I0130 20:44:07.979713   45441 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:08.005989   45441 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:08.006020   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) Calling .Close
	I0130 20:44:08.006266   45441 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:08.006287   45441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:08.006286   45441 main.go:141] libmachine: (default-k8s-diff-port-877742) DBG | Closing plugin on server side
	I0130 20:44:08.008091   45441 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0130 20:44:05.329467   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:07.826212   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:08.009918   45441 addons.go:505] enable addons completed in 2.713172208s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0130 20:44:08.032478   45441 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.539497   45441 pod_ready.go:92] pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.539527   45441 pod_ready.go:81] duration metric: took 1.50701275s waiting for pod "coredns-5dd5756b68-tlb8h" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.539537   45441 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.545068   45441 pod_ready.go:92] pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.545090   45441 pod_ready.go:81] duration metric: took 5.546681ms waiting for pod "etcd-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.545099   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.550794   45441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.550817   45441 pod_ready.go:81] duration metric: took 5.711144ms waiting for pod "kube-apiserver-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.550829   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.556050   45441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.556068   45441 pod_ready.go:81] duration metric: took 5.232882ms waiting for pod "kube-controller-manager-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.556076   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-59zvd" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.562849   45441 pod_ready.go:92] pod "kube-proxy-59zvd" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.562866   45441 pod_ready.go:81] duration metric: took 6.784197ms waiting for pod "kube-proxy-59zvd" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.562874   45441 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.965815   45441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace has status "Ready":"True"
	I0130 20:44:09.965846   45441 pod_ready.go:81] duration metric: took 402.96387ms waiting for pod "kube-scheduler-default-k8s-diff-port-877742" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:09.965860   45441 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:07.265985   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:09.765494   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:10.326063   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:12.825921   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:11.974724   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:14.473879   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:12.265674   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:14.765546   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:15.325945   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:17.326041   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:16.974143   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:19.473552   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:16.765691   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:18.766995   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:19.824366   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:21.824919   45819 pod_ready.go:102] pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:24.318779   45819 pod_ready.go:81] duration metric: took 4m0.000598437s waiting for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" ...
	E0130 20:44:24.318808   45819 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-xt45r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 20:44:24.318829   45819 pod_ready.go:38] duration metric: took 4m1.194970045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:24.318872   45819 kubeadm.go:640] restartCluster took 5m9.285235807s
	W0130 20:44:24.318943   45819 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 20:44:24.318974   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 20:44:21.973193   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.974160   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:21.263429   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.263586   44923 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:23.263609   44923 pod_ready.go:81] duration metric: took 4m0.006890289s waiting for pod "metrics-server-57f55c9bc5-wzb2g" in "kube-system" namespace to be "Ready" ...
	E0130 20:44:23.263618   44923 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:44:23.263625   44923 pod_ready.go:38] duration metric: took 4m4.564565945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:23.263637   44923 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:44:23.263671   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:23.263711   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:23.319983   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:23.320013   44923 cri.go:89] found id: ""
	I0130 20:44:23.320023   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:23.320078   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.325174   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:23.325239   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:23.375914   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:23.375952   44923 cri.go:89] found id: ""
	I0130 20:44:23.375960   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:23.376003   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.380265   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:23.380324   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:23.428507   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:23.428534   44923 cri.go:89] found id: ""
	I0130 20:44:23.428544   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:23.428591   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.434113   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:23.434184   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:23.522888   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:23.522915   44923 cri.go:89] found id: ""
	I0130 20:44:23.522922   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:23.522964   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.534952   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:23.535015   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:23.576102   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:23.576129   44923 cri.go:89] found id: ""
	I0130 20:44:23.576138   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:23.576185   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.580463   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:23.580527   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:23.620990   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:23.621011   44923 cri.go:89] found id: ""
	I0130 20:44:23.621018   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:23.621069   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.625706   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:23.625762   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:23.666341   44923 cri.go:89] found id: ""
	I0130 20:44:23.666368   44923 logs.go:276] 0 containers: []
	W0130 20:44:23.666378   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:23.666384   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:23.666441   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:23.707229   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:23.707248   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:23.707252   44923 cri.go:89] found id: ""
	I0130 20:44:23.707258   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:23.707314   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.711242   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:23.715859   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:23.715883   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:23.775696   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:23.775722   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:23.817767   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:23.817796   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:24.301934   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:24.301969   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:24.361236   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:24.361265   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:24.511849   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:24.511886   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:24.573648   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:24.573683   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:24.620572   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:24.620608   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:24.687312   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:24.687346   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:24.702224   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:24.702262   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:24.749188   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:24.749218   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:24.793069   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:24.793093   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:24.829705   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:24.829730   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:29.263901   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.944900372s)
	I0130 20:44:29.263978   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:29.277198   45819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:44:29.286661   45819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:44:29.297088   45819 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:44:29.297129   45819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0130 20:44:29.360347   45819 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0130 20:44:29.360446   45819 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 20:44:29.516880   45819 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 20:44:29.517075   45819 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 20:44:29.517217   45819 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 20:44:29.756175   45819 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:44:29.756323   45819 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:44:29.764820   45819 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0130 20:44:29.907654   45819 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:44:26.473595   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:28.473808   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:29.909307   45819 out.go:204]   - Generating certificates and keys ...
	I0130 20:44:29.909397   45819 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 20:44:29.909484   45819 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 20:44:29.909578   45819 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 20:44:29.909674   45819 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 20:44:29.909784   45819 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 20:44:29.909866   45819 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 20:44:29.909974   45819 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 20:44:29.910057   45819 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 20:44:29.910163   45819 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 20:44:29.910266   45819 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 20:44:29.910316   45819 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 20:44:29.910409   45819 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:44:29.974805   45819 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:44:30.281258   45819 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:44:30.605015   45819 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:44:30.782125   45819 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:44:30.783329   45819 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:44:27.369691   44923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:44:27.393279   44923 api_server.go:72] duration metric: took 4m16.430750077s to wait for apiserver process to appear ...
	I0130 20:44:27.393306   44923 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:44:27.393355   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:27.393434   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:27.443366   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:27.443390   44923 cri.go:89] found id: ""
	I0130 20:44:27.443400   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:27.443457   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.448963   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:27.449021   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:27.502318   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:27.502341   44923 cri.go:89] found id: ""
	I0130 20:44:27.502348   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:27.502398   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.507295   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:27.507352   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:27.548224   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:27.548247   44923 cri.go:89] found id: ""
	I0130 20:44:27.548255   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:27.548299   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.552806   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:27.552864   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:27.608403   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:27.608434   44923 cri.go:89] found id: ""
	I0130 20:44:27.608444   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:27.608523   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.613370   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:27.613435   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:27.668380   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:27.668406   44923 cri.go:89] found id: ""
	I0130 20:44:27.668417   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:27.668470   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.673171   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:27.673231   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:27.720444   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:27.720473   44923 cri.go:89] found id: ""
	I0130 20:44:27.720483   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:27.720546   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.725007   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:27.725062   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:27.772186   44923 cri.go:89] found id: ""
	I0130 20:44:27.772214   44923 logs.go:276] 0 containers: []
	W0130 20:44:27.772224   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:27.772231   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:27.772288   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:27.813222   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:27.813259   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:27.813268   44923 cri.go:89] found id: ""
	I0130 20:44:27.813286   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:27.813347   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.817565   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:27.821737   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:27.821759   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:28.299900   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:28.299933   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:28.441830   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:28.441866   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:28.485579   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:28.485611   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:28.500668   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:28.500691   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:28.558472   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:28.558502   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:28.604655   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:28.604687   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:28.670010   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:28.670041   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:28.712222   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:28.712259   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:28.764243   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:28.764276   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:28.801930   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:28.801956   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:28.848585   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:28.848612   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:28.902903   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:28.902936   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:30.785050   45819 out.go:204]   - Booting up control plane ...
	I0130 20:44:30.785155   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:44:30.790853   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:44:30.798657   45819 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:44:30.799425   45819 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:44:30.801711   45819 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 20:44:30.475584   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:32.973843   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:34.974144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:31.454103   44923 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0130 20:44:31.460009   44923 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0130 20:44:31.461505   44923 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 20:44:31.461527   44923 api_server.go:131] duration metric: took 4.068214052s to wait for apiserver health ...
	I0130 20:44:31.461537   44923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:44:31.461563   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:44:31.461626   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:44:31.509850   44923 cri.go:89] found id: "ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:31.509874   44923 cri.go:89] found id: ""
	I0130 20:44:31.509884   44923 logs.go:276] 1 containers: [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e]
	I0130 20:44:31.509941   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.514078   44923 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:44:31.514136   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:44:31.555581   44923 cri.go:89] found id: "b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:31.555605   44923 cri.go:89] found id: ""
	I0130 20:44:31.555613   44923 logs.go:276] 1 containers: [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901]
	I0130 20:44:31.555674   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.559888   44923 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:44:31.559948   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:44:31.620256   44923 cri.go:89] found id: "3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:31.620285   44923 cri.go:89] found id: ""
	I0130 20:44:31.620295   44923 logs.go:276] 1 containers: [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c]
	I0130 20:44:31.620352   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.626003   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:44:31.626064   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:44:31.662862   44923 cri.go:89] found id: "39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:31.662889   44923 cri.go:89] found id: ""
	I0130 20:44:31.662899   44923 logs.go:276] 1 containers: [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79]
	I0130 20:44:31.662972   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.668242   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:44:31.668306   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:44:31.717065   44923 cri.go:89] found id: "880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:31.717089   44923 cri.go:89] found id: ""
	I0130 20:44:31.717098   44923 logs.go:276] 1 containers: [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689]
	I0130 20:44:31.717160   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.722195   44923 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:44:31.722250   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:44:31.779789   44923 cri.go:89] found id: "10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:31.779812   44923 cri.go:89] found id: ""
	I0130 20:44:31.779821   44923 logs.go:276] 1 containers: [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f]
	I0130 20:44:31.779894   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.784710   44923 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:44:31.784776   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:44:31.826045   44923 cri.go:89] found id: ""
	I0130 20:44:31.826073   44923 logs.go:276] 0 containers: []
	W0130 20:44:31.826082   44923 logs.go:278] No container was found matching "kindnet"
	I0130 20:44:31.826087   44923 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:44:31.826131   44923 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:44:31.868212   44923 cri.go:89] found id: "e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:31.868236   44923 cri.go:89] found id: "748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:31.868243   44923 cri.go:89] found id: ""
	I0130 20:44:31.868253   44923 logs.go:276] 2 containers: [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446]
	I0130 20:44:31.868314   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.873019   44923 ssh_runner.go:195] Run: which crictl
	I0130 20:44:31.877432   44923 logs.go:123] Gathering logs for storage-provisioner [e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0] ...
	I0130 20:44:31.877456   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e690d53fe9ae6dfb43427c24ed6e7a41eadde0181315a3e34c6b0a271c253ed0"
	I0130 20:44:31.915888   44923 logs.go:123] Gathering logs for storage-provisioner [748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446] ...
	I0130 20:44:31.915915   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 748483279e2b917ce446437a3d68c2c4a257bda921b5c4819a0187580e9bc446"
	I0130 20:44:31.972950   44923 logs.go:123] Gathering logs for kubelet ...
	I0130 20:44:31.972978   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:44:32.028993   44923 logs.go:123] Gathering logs for dmesg ...
	I0130 20:44:32.029028   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:44:32.046602   44923 logs.go:123] Gathering logs for etcd [b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901] ...
	I0130 20:44:32.046633   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d8d2bbf972c94b2f76045309549213ccb63d28072796fd200e5e52260e2901"
	I0130 20:44:32.094088   44923 logs.go:123] Gathering logs for kube-proxy [880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689] ...
	I0130 20:44:32.094123   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 880f1c6b663c7b282e4a61f2966fd80b9d7effdbb99eaad2a56bc55fc0384689"
	I0130 20:44:32.138616   44923 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:44:32.138645   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:44:32.526995   44923 logs.go:123] Gathering logs for container status ...
	I0130 20:44:32.527033   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:44:32.591970   44923 logs.go:123] Gathering logs for kube-apiserver [ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e] ...
	I0130 20:44:32.592003   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac5dbd0849de67bb0d5085dff8dd4cf980b5dd1774c3ff9f5dc409c29e72c63e"
	I0130 20:44:32.655438   44923 logs.go:123] Gathering logs for coredns [3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c] ...
	I0130 20:44:32.655466   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d08fb7c4f0e54898dfcead926596aaa284af48599f20d9c0a85e3d4ab10283c"
	I0130 20:44:32.707131   44923 logs.go:123] Gathering logs for kube-scheduler [39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79] ...
	I0130 20:44:32.707163   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39917caad7f3be344da0a65dc3676d833425743f5349d402067af6de93f38a79"
	I0130 20:44:32.749581   44923 logs.go:123] Gathering logs for kube-controller-manager [10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f] ...
	I0130 20:44:32.749610   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10fb0450f95eda143496d1b99c01a04ee1a6c0a0a7bb69a0d6debbc0f3324f1f"
	I0130 20:44:32.815778   44923 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:44:32.815805   44923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:44:35.448121   44923 system_pods.go:59] 8 kube-system pods found
	I0130 20:44:35.448155   44923 system_pods.go:61] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running
	I0130 20:44:35.448162   44923 system_pods.go:61] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running
	I0130 20:44:35.448169   44923 system_pods.go:61] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running
	I0130 20:44:35.448175   44923 system_pods.go:61] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running
	I0130 20:44:35.448181   44923 system_pods.go:61] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running
	I0130 20:44:35.448188   44923 system_pods.go:61] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running
	I0130 20:44:35.448198   44923 system_pods.go:61] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:44:35.448210   44923 system_pods.go:61] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running
	I0130 20:44:35.448221   44923 system_pods.go:74] duration metric: took 3.986678023s to wait for pod list to return data ...
	I0130 20:44:35.448227   44923 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:44:35.451377   44923 default_sa.go:45] found service account: "default"
	I0130 20:44:35.451397   44923 default_sa.go:55] duration metric: took 3.162882ms for default service account to be created ...
	I0130 20:44:35.451404   44923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:44:35.457941   44923 system_pods.go:86] 8 kube-system pods found
	I0130 20:44:35.457962   44923 system_pods.go:89] "coredns-76f75df574-d4c7t" [a8701b4d-0616-4c05-9ba0-0157adae2d13] Running
	I0130 20:44:35.457969   44923 system_pods.go:89] "etcd-no-preload-473743" [ed931ab3-95d8-4115-ae97-1c274ed8432d] Running
	I0130 20:44:35.457976   44923 system_pods.go:89] "kube-apiserver-no-preload-473743" [64b9b17c-6df5-41db-a308-b0deba016c9d] Running
	I0130 20:44:35.457983   44923 system_pods.go:89] "kube-controller-manager-no-preload-473743" [a28d8dc6-244a-4dfa-9d7f-468281823332] Running
	I0130 20:44:35.457992   44923 system_pods.go:89] "kube-proxy-zklzt" [fa94d19c-b0d6-4e78-86e8-e6b5f3608753] Running
	I0130 20:44:35.457999   44923 system_pods.go:89] "kube-scheduler-no-preload-473743" [b8f8066b-8644-42c3-b47a-52e34210e410] Running
	I0130 20:44:35.458013   44923 system_pods.go:89] "metrics-server-57f55c9bc5-wzb2g" [cae1a52f-dc27-41ca-a8e2-714eb1a1c8a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:44:35.458023   44923 system_pods.go:89] "storage-provisioner" [a257b079-cb6e-45fd-b05d-9ad6fa26225e] Running
	I0130 20:44:35.458032   44923 system_pods.go:126] duration metric: took 6.622973ms to wait for k8s-apps to be running ...
	I0130 20:44:35.458040   44923 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:44:35.458085   44923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:35.478158   44923 system_svc.go:56] duration metric: took 20.107762ms WaitForService to wait for kubelet.
	I0130 20:44:35.478182   44923 kubeadm.go:581] duration metric: took 4m24.515659177s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:44:35.478205   44923 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:44:35.481624   44923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:44:35.481649   44923 node_conditions.go:123] node cpu capacity is 2
	I0130 20:44:35.481661   44923 node_conditions.go:105] duration metric: took 3.450762ms to run NodePressure ...
	I0130 20:44:35.481674   44923 start.go:228] waiting for startup goroutines ...
	I0130 20:44:35.481682   44923 start.go:233] waiting for cluster config update ...
	I0130 20:44:35.481695   44923 start.go:242] writing updated cluster config ...
	I0130 20:44:35.481966   44923 ssh_runner.go:195] Run: rm -f paused
	I0130 20:44:35.534192   44923 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0130 20:44:35.537286   44923 out.go:177] * Done! kubectl is now configured to use "no-preload-473743" cluster and "default" namespace by default
	I0130 20:44:36.975176   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:39.472594   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:40.808532   45819 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005048 seconds
	I0130 20:44:40.808703   45819 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 20:44:40.821445   45819 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 20:44:41.350196   45819 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 20:44:41.350372   45819 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-150971 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0130 20:44:41.859169   45819 kubeadm.go:322] [bootstrap-token] Using token: vlkrdr.8ubylscclgt88ll2
	I0130 20:44:41.862311   45819 out.go:204]   - Configuring RBAC rules ...
	I0130 20:44:41.862450   45819 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 20:44:41.870072   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 20:44:41.874429   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 20:44:41.883936   45819 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 20:44:41.887738   45819 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 20:44:41.963361   45819 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 20:44:42.299030   45819 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 20:44:42.300623   45819 kubeadm.go:322] 
	I0130 20:44:42.300708   45819 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 20:44:42.300721   45819 kubeadm.go:322] 
	I0130 20:44:42.300820   45819 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 20:44:42.300845   45819 kubeadm.go:322] 
	I0130 20:44:42.300886   45819 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 20:44:42.300975   45819 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 20:44:42.301048   45819 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 20:44:42.301061   45819 kubeadm.go:322] 
	I0130 20:44:42.301126   45819 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 20:44:42.301241   45819 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 20:44:42.301309   45819 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 20:44:42.301326   45819 kubeadm.go:322] 
	I0130 20:44:42.301417   45819 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0130 20:44:42.301482   45819 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 20:44:42.301488   45819 kubeadm.go:322] 
	I0130 20:44:42.301554   45819 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vlkrdr.8ubylscclgt88ll2 \
	I0130 20:44:42.301684   45819 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 20:44:42.301717   45819 kubeadm.go:322]     --control-plane 	  
	I0130 20:44:42.301726   45819 kubeadm.go:322] 
	I0130 20:44:42.301827   45819 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 20:44:42.301844   45819 kubeadm.go:322] 
	I0130 20:44:42.301984   45819 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vlkrdr.8ubylscclgt88ll2 \
	I0130 20:44:42.302116   45819 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 20:44:42.302689   45819 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 20:44:42.302726   45819 cni.go:84] Creating CNI manager for ""
	I0130 20:44:42.302739   45819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:44:42.305197   45819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 20:44:42.306389   45819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 20:44:42.357619   45819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 20:44:42.381081   45819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:44:42.381189   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:42.381196   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=old-k8s-version-150971 minikube.k8s.io/updated_at=2024_01_30T20_44_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:42.406368   45819 ops.go:34] apiserver oom_adj: -16
	I0130 20:44:42.639356   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:43.139439   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:43.640260   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:44.140080   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:44.639587   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:41.473598   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:43.474059   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:45.140354   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:45.640062   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:46.140282   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:46.639400   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:47.140308   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:47.640045   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:48.139406   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:48.640423   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:49.139702   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:49.640036   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:45.973530   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:47.974364   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:49.974551   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:50.139435   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:50.639471   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:51.140088   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:51.639444   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.139401   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.639731   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:53.140050   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:53.639411   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:54.139942   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:54.640279   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:52.473624   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:54.474924   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:55.139610   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:55.639431   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:56.140267   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:56.639444   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:57.140068   45819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:44:57.296527   45819 kubeadm.go:1088] duration metric: took 14.915402679s to wait for elevateKubeSystemPrivileges.
	I0130 20:44:57.296567   45819 kubeadm.go:406] StartCluster complete in 5m42.316503122s
	I0130 20:44:57.296588   45819 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:57.296672   45819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:44:57.298762   45819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:44:57.299005   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:44:57.299123   45819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:44:57.299208   45819 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299220   45819 config.go:182] Loaded profile config "old-k8s-version-150971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:44:57.299229   45819 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-150971"
	W0130 20:44:57.299241   45819 addons.go:243] addon storage-provisioner should already be in state true
	I0130 20:44:57.299220   45819 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299300   45819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-150971"
	I0130 20:44:57.299315   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.299247   45819 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-150971"
	I0130 20:44:57.299387   45819 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-150971"
	W0130 20:44:57.299397   45819 addons.go:243] addon metrics-server should already be in state true
	I0130 20:44:57.299433   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.299705   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299726   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.299756   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299760   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.299796   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.299897   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.319159   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0130 20:44:57.319202   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0130 20:44:57.319167   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34823
	I0130 20:44:57.319578   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.319707   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.319771   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.320071   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320103   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320242   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320261   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320408   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.320423   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.320586   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.320630   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.321140   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.321158   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.321591   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.321624   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.321675   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.321705   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.325091   45819 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-150971"
	W0130 20:44:57.325106   45819 addons.go:243] addon default-storageclass should already be in state true
	I0130 20:44:57.325125   45819 host.go:66] Checking if "old-k8s-version-150971" exists ...
	I0130 20:44:57.325420   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.325442   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.342652   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0130 20:44:57.342787   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41961
	I0130 20:44:57.343203   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.343303   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.343745   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.343779   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.343848   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I0130 20:44:57.343887   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.343903   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.344220   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.344244   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.344220   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.344493   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.344494   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.344707   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.344730   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.345083   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.346139   45819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:44:57.346172   45819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:44:57.346830   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.346891   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.348974   45819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 20:44:57.350330   45819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:44:57.350364   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 20:44:57.351707   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 20:44:57.351729   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.351684   45819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:57.351795   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:44:57.351821   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.356145   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356428   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356595   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.356621   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.356767   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.357040   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.357095   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.357123   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.357218   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.357266   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.357458   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.357451   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.357617   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.357754   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.362806   45819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0130 20:44:57.363167   45819 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:44:57.363742   45819 main.go:141] libmachine: Using API Version  1
	I0130 20:44:57.363770   45819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:44:57.364074   45819 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:44:57.364280   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetState
	I0130 20:44:57.365877   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .DriverName
	I0130 20:44:57.366086   45819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:57.366096   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:44:57.366107   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHHostname
	I0130 20:44:57.369237   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.369890   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHPort
	I0130 20:44:57.369930   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:fe:f8", ip: ""} in network mk-old-k8s-version-150971: {Iface:virbr1 ExpiryTime:2024-01-30 21:38:59 +0000 UTC Type:0 Mac:52:54:00:6e:fe:f8 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:old-k8s-version-150971 Clientid:01:52:54:00:6e:fe:f8}
	I0130 20:44:57.369968   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | domain old-k8s-version-150971 has defined IP address 192.168.39.16 and MAC address 52:54:00:6e:fe:f8 in network mk-old-k8s-version-150971
	I0130 20:44:57.370351   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHKeyPath
	I0130 20:44:57.370563   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .GetSSHUsername
	I0130 20:44:57.370712   45819 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/old-k8s-version-150971/id_rsa Username:docker}
	I0130 20:44:57.509329   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:44:57.535146   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:44:57.536528   45819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:44:57.559042   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 20:44:57.559066   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 20:44:57.643054   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 20:44:57.643081   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 20:44:57.773561   45819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:57.773588   45819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 20:44:57.848668   45819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 20:44:57.910205   45819 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-150971" context rescaled to 1 replicas
	I0130 20:44:57.910247   45819 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:44:57.912390   45819 out.go:177] * Verifying Kubernetes components...
	I0130 20:44:57.913764   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:44:58.721986   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.186811658s)
	I0130 20:44:58.722033   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722045   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722145   45819 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.185575635s)
	I0130 20:44:58.722210   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212845439s)
	I0130 20:44:58.722213   45819 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0130 20:44:58.722254   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722271   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722347   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.722359   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722371   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.722381   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722391   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722537   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.722576   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722593   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.722611   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.722621   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.722659   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.722675   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.724251   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.724291   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.724304   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:58.798383   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:58.798410   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:58.798745   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:58.798767   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:58.798816   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125243   45819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.276531373s)
	I0130 20:44:59.125305   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:59.125322   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:59.125256   45819 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.211465342s)
	I0130 20:44:59.125360   45819 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-150971" to be "Ready" ...
	I0130 20:44:59.125612   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:59.125639   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125650   45819 main.go:141] libmachine: Making call to close driver server
	I0130 20:44:59.125650   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:59.125659   45819 main.go:141] libmachine: (old-k8s-version-150971) Calling .Close
	I0130 20:44:59.125902   45819 main.go:141] libmachine: (old-k8s-version-150971) DBG | Closing plugin on server side
	I0130 20:44:59.125953   45819 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:44:59.125963   45819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:44:59.125972   45819 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-150971"
	I0130 20:44:59.127634   45819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 20:44:59.129415   45819 addons.go:505] enable addons completed in 1.830294624s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 20:44:59.141691   45819 node_ready.go:49] node "old-k8s-version-150971" has status "Ready":"True"
	I0130 20:44:59.141715   45819 node_ready.go:38] duration metric: took 16.331635ms waiting for node "old-k8s-version-150971" to be "Ready" ...
	I0130 20:44:59.141725   45819 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:44:59.146645   45819 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace to be "Ready" ...
	I0130 20:44:56.475086   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:44:58.973370   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:00.161718   45819 pod_ready.go:92] pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace has status "Ready":"True"
	I0130 20:45:00.161741   45819 pod_ready.go:81] duration metric: took 1.015069343s waiting for pod "coredns-5644d7b6d9-7qhmc" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.161754   45819 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zbdxm" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.668280   45819 pod_ready.go:92] pod "kube-proxy-zbdxm" in "kube-system" namespace has status "Ready":"True"
	I0130 20:45:00.668313   45819 pod_ready.go:81] duration metric: took 506.550797ms waiting for pod "kube-proxy-zbdxm" in "kube-system" namespace to be "Ready" ...
	I0130 20:45:00.668328   45819 pod_ready.go:38] duration metric: took 1.526591158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:45:00.668343   45819 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:45:00.668398   45819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:45:00.682119   45819 api_server.go:72] duration metric: took 2.771845703s to wait for apiserver process to appear ...
	I0130 20:45:00.682143   45819 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:45:00.682167   45819 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0130 20:45:00.687603   45819 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0130 20:45:00.688287   45819 api_server.go:141] control plane version: v1.16.0
	I0130 20:45:00.688302   45819 api_server.go:131] duration metric: took 6.153997ms to wait for apiserver health ...
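Note: the healthz probe logged just above is a plain HTTPS GET against the apiserver address recorded in the log. A minimal way to reproduce the same check by hand is sketched below; the address is taken from the log, and -k is used only because this sketch skips the cluster CA bundle that the real client trusts.

    # Illustrative sketch: reproduce the apiserver healthz probe from the log
    curl -sk https://192.168.39.16:8443/healthz
    # a healthy control plane answers with the body: ok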
	I0130 20:45:00.688309   45819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:45:00.691917   45819 system_pods.go:59] 4 kube-system pods found
	I0130 20:45:00.691936   45819 system_pods.go:61] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.691942   45819 system_pods.go:61] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.691948   45819 system_pods.go:61] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.691954   45819 system_pods.go:61] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.691962   45819 system_pods.go:74] duration metric: took 3.648521ms to wait for pod list to return data ...
	I0130 20:45:00.691970   45819 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:45:00.694229   45819 default_sa.go:45] found service account: "default"
	I0130 20:45:00.694250   45819 default_sa.go:55] duration metric: took 2.274248ms for default service account to be created ...
	I0130 20:45:00.694258   45819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:45:00.698156   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:00.698179   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.698187   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.698198   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.698210   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.698234   45819 retry.go:31] will retry after 277.03208ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:00.979637   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:00.979660   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:00.979665   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:00.979671   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:00.979677   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 20:45:00.979694   45819 retry.go:31] will retry after 341.469517ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.326631   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:01.326666   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:01.326674   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:01.326683   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:01.326689   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:01.326713   45819 retry.go:31] will retry after 487.104661ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.818702   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:01.818733   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:01.818742   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:01.818752   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:01.818759   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:01.818779   45819 retry.go:31] will retry after 574.423042ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:02.398901   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:02.398940   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:02.398949   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:02.398959   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:02.398966   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:02.398986   45819 retry.go:31] will retry after 741.538469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:03.145137   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:03.145162   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:03.145168   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:03.145174   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:03.145179   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:03.145194   45819 retry.go:31] will retry after 742.915086ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:03.892722   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:03.892748   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:03.892753   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:03.892759   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:03.892764   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:03.892779   45819 retry.go:31] will retry after 786.727719ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:01.473056   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:03.473346   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:04.685933   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:04.685967   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:04.685976   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:04.685985   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:04.685993   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:04.686016   45819 retry.go:31] will retry after 1.232157955s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:05.923020   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:05.923045   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:05.923050   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:05.923056   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:05.923061   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:05.923076   45819 retry.go:31] will retry after 1.652424416s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:07.580982   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:07.581007   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:07.581013   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:07.581019   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:07.581026   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:07.581042   45819 retry.go:31] will retry after 1.774276151s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:09.360073   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:09.360098   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:09.360103   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:09.360110   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:09.360115   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:09.360133   45819 retry.go:31] will retry after 2.786181653s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:05.975152   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:07.975274   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:12.151191   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:12.151215   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:12.151221   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:12.151227   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:12.151232   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:12.151258   45819 retry.go:31] will retry after 3.456504284s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:10.472793   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:12.474310   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:14.977715   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:15.613679   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:15.613705   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:15.613711   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:15.613718   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:15.613722   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:15.613741   45819 retry.go:31] will retry after 4.434906632s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:17.472993   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:19.473530   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:20.053023   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:20.053050   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:20.053055   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:20.053062   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:20.053066   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:20.053082   45819 retry.go:31] will retry after 3.910644554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:23.969998   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:23.970027   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:23.970035   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:23.970047   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:23.970053   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:23.970075   45819 retry.go:31] will retry after 4.907431581s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:21.473946   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:23.973965   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:28.881886   45819 system_pods.go:86] 4 kube-system pods found
	I0130 20:45:28.881911   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:28.881917   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:28.881924   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:28.881929   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:28.881956   45819 retry.go:31] will retry after 7.594967181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:26.473519   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:28.474676   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:30.972445   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:32.973156   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:34.973590   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:36.482226   45819 system_pods.go:86] 5 kube-system pods found
	I0130 20:45:36.482255   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:36.482261   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:36.482267   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Pending
	I0130 20:45:36.482277   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:36.482284   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:36.482306   45819 retry.go:31] will retry after 8.875079493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0130 20:45:36.974189   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:39.474803   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:41.973709   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:43.974130   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:45.361733   45819 system_pods.go:86] 5 kube-system pods found
	I0130 20:45:45.361760   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:45.361766   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:45.361772   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:45:45.361781   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:45.361789   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:45.361820   45819 retry.go:31] will retry after 9.918306048s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0130 20:45:45.976853   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:48.476619   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:50.974748   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:52.975900   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:55.285765   45819 system_pods.go:86] 6 kube-system pods found
	I0130 20:45:55.285793   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:45:55.285801   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Pending
	I0130 20:45:55.285807   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:45:55.285813   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:45:55.285822   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:45:55.285828   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:45:55.285849   45819 retry.go:31] will retry after 12.684125727s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0130 20:45:55.473705   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:57.973533   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:45:59.974108   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:02.473825   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:04.973953   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:07.975898   45819 system_pods.go:86] 8 kube-system pods found
	I0130 20:46:07.975923   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:46:07.975929   45819 system_pods.go:89] "etcd-old-k8s-version-150971" [21884345-e587-4bae-88b9-78e0bdacf954] Running
	I0130 20:46:07.975933   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Running
	I0130 20:46:07.975937   45819 system_pods.go:89] "kube-controller-manager-old-k8s-version-150971" [f0cfbd77-f00e-4d40-a301-f24f6ed937e1] Pending
	I0130 20:46:07.975941   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:46:07.975944   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:46:07.975951   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:46:07.975955   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:46:07.975969   45819 retry.go:31] will retry after 15.59894457s: missing components: kube-controller-manager
	I0130 20:46:07.472712   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:09.474175   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:11.478228   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:13.973190   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:16.473264   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:18.474418   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:23.581862   45819 system_pods.go:86] 8 kube-system pods found
	I0130 20:46:23.581890   45819 system_pods.go:89] "coredns-5644d7b6d9-7qhmc" [03050fc6-39c5-45fa-8fc0-fd41a78392f1] Running
	I0130 20:46:23.581895   45819 system_pods.go:89] "etcd-old-k8s-version-150971" [21884345-e587-4bae-88b9-78e0bdacf954] Running
	I0130 20:46:23.581899   45819 system_pods.go:89] "kube-apiserver-old-k8s-version-150971" [14975616-ba41-4199-b0e3-179dc01def2d] Running
	I0130 20:46:23.581904   45819 system_pods.go:89] "kube-controller-manager-old-k8s-version-150971" [f0cfbd77-f00e-4d40-a301-f24f6ed937e1] Running
	I0130 20:46:23.581907   45819 system_pods.go:89] "kube-proxy-zbdxm" [82328394-34a6-476e-994b-8469c1cd370f] Running
	I0130 20:46:23.581911   45819 system_pods.go:89] "kube-scheduler-old-k8s-version-150971" [46ff540f-ce09-4f61-b9e4-3cc641f4e2b7] Running
	I0130 20:46:23.581918   45819 system_pods.go:89] "metrics-server-74d5856cc6-22948" [89eebcb3-0362-49b7-8074-6060ed865fc7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:46:23.581923   45819 system_pods.go:89] "storage-provisioner" [ff3eddf0-39e5-415f-b6e1-2b9324ae67f5] Running
	I0130 20:46:23.581932   45819 system_pods.go:126] duration metric: took 1m22.887668504s to wait for k8s-apps to be running ...
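Note: the retry.go lines above are a poll loop with growing backoff, waiting for the kubelet to bring up the static control-plane pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) alongside the already-running workloads. An equivalent manual check is sketched below; the context name comes from the log's profile, and the label selector mirrors the component labels minikube waits on above.

    # Illustrative manual equivalent of the poll loop above
    kubectl --context old-k8s-version-150971 -n kube-system get pods
    kubectl --context old-k8s-version-150971 -n kube-system get pods -l component=kube-controller-manager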
	I0130 20:46:23.581939   45819 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:46:23.581986   45819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:46:23.604099   45819 system_svc.go:56] duration metric: took 22.14886ms WaitForService to wait for kubelet.
	I0130 20:46:23.604134   45819 kubeadm.go:581] duration metric: took 1m25.693865663s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:46:23.604159   45819 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:46:23.607539   45819 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:46:23.607567   45819 node_conditions.go:123] node cpu capacity is 2
	I0130 20:46:23.607580   45819 node_conditions.go:105] duration metric: took 3.415829ms to run NodePressure ...
	I0130 20:46:23.607594   45819 start.go:228] waiting for startup goroutines ...
	I0130 20:46:23.607602   45819 start.go:233] waiting for cluster config update ...
	I0130 20:46:23.607615   45819 start.go:242] writing updated cluster config ...
	I0130 20:46:23.607933   45819 ssh_runner.go:195] Run: rm -f paused
	I0130 20:46:23.658357   45819 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0130 20:46:23.660375   45819 out.go:177] 
	W0130 20:46:23.661789   45819 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0130 20:46:23.663112   45819 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0130 20:46:23.664623   45819 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-150971" cluster and "default" namespace by default
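Note: the warning above flags a large minor-version skew between the host kubectl (1.29.1) and the cluster (1.16.0). The log itself suggests using the kubectl bundled with minikube; a sketch of that invocation for this profile is shown below (-p selects the profile when several clusters are running).

    # Use the cluster-matched kubectl, as suggested in the log above
    minikube -p old-k8s-version-150971 kubectl -- get pods -A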
	I0130 20:46:20.474791   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:22.973143   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:24.974320   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:27.474508   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:29.973471   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:31.973727   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:33.974180   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:36.472928   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:38.474336   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:40.973509   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:42.973942   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:45.473120   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:47.972943   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:49.973756   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:51.973913   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:54.472597   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:56.473076   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:46:58.974262   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:01.476906   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:03.974275   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:06.474453   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:08.973144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:10.973407   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:12.974842   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:15.473765   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:17.474938   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:19.973849   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:21.974660   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:23.977144   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:26.479595   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:28.975572   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:31.473715   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:33.974243   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:36.472321   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:38.473133   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:40.973786   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:43.473691   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:45.476882   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:47.975923   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:50.474045   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:52.474411   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:54.474531   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:56.973542   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:47:58.974226   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:00.975045   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:03.473440   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:05.473667   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:07.973417   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:09.978199   45441 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace has status "Ready":"False"
	I0130 20:48:09.978230   45441 pod_ready.go:81] duration metric: took 4m0.012361166s waiting for pod "metrics-server-57f55c9bc5-xjc2m" in "kube-system" namespace to be "Ready" ...
	E0130 20:48:09.978243   45441 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 20:48:09.978253   45441 pod_ready.go:38] duration metric: took 4m1.998529694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:48:09.978276   45441 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:48:09.978323   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:09.978403   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:10.038921   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:10.038949   45441 cri.go:89] found id: ""
	I0130 20:48:10.038958   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:10.039017   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.043851   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:10.043902   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:10.088920   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:10.088945   45441 cri.go:89] found id: ""
	I0130 20:48:10.088952   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:10.089001   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.094186   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:10.094267   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:10.143350   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:10.143380   45441 cri.go:89] found id: ""
	I0130 20:48:10.143390   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:10.143450   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.148357   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:10.148426   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:10.187812   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:10.187848   45441 cri.go:89] found id: ""
	I0130 20:48:10.187858   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:10.187914   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.192049   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:10.192109   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:10.241052   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:10.241079   45441 cri.go:89] found id: ""
	I0130 20:48:10.241088   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:10.241139   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.245711   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:10.245763   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:10.287115   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:10.287139   45441 cri.go:89] found id: ""
	I0130 20:48:10.287148   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:10.287194   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.291627   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:10.291697   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:10.341321   45441 cri.go:89] found id: ""
	I0130 20:48:10.341346   45441 logs.go:276] 0 containers: []
	W0130 20:48:10.341356   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:10.341362   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:10.341420   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:10.385515   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:10.385543   45441 cri.go:89] found id: ""
	I0130 20:48:10.385552   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:10.385601   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:10.390397   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:10.390433   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:10.832689   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:10.832724   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:10.846560   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:10.846587   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:10.887801   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:10.887826   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:10.942977   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:10.943003   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:10.987642   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:10.987669   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:11.024934   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:11.024964   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:11.076336   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:11.076373   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:11.127315   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:11.127344   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:11.182944   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:11.182974   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:11.276494   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:11.276525   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:11.413186   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:11.413213   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
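Note: the "Gathering logs" pass above shells out over SSH to journalctl and crictl on the node. The commands below mirror the ones in the log and can be run by hand from a shell opened with "minikube -p <profile> ssh" to inspect the same data.

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a
    sudo crictl logs --tail 400 <container-id>   # id taken from the "crictl ps -a" output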
	I0130 20:48:13.960537   45441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:48:13.977332   45441 api_server.go:72] duration metric: took 4m8.11544723s to wait for apiserver process to appear ...
	I0130 20:48:13.977362   45441 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:48:13.977400   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:13.977466   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:14.025510   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:14.025534   45441 cri.go:89] found id: ""
	I0130 20:48:14.025542   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:14.025593   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.030025   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:14.030103   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:14.070504   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:14.070524   45441 cri.go:89] found id: ""
	I0130 20:48:14.070531   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:14.070577   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.074858   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:14.074928   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:14.110816   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:14.110844   45441 cri.go:89] found id: ""
	I0130 20:48:14.110853   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:14.110912   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.114997   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:14.115079   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:14.169213   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:14.169240   45441 cri.go:89] found id: ""
	I0130 20:48:14.169249   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:14.169305   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.173541   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:14.173607   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:14.210634   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:14.210657   45441 cri.go:89] found id: ""
	I0130 20:48:14.210664   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:14.210717   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.215015   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:14.215074   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:14.258454   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:14.258477   45441 cri.go:89] found id: ""
	I0130 20:48:14.258484   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:14.258532   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.262486   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:14.262537   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:14.302175   45441 cri.go:89] found id: ""
	I0130 20:48:14.302205   45441 logs.go:276] 0 containers: []
	W0130 20:48:14.302213   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:14.302218   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:14.302262   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:14.339497   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:14.339523   45441 cri.go:89] found id: ""
	I0130 20:48:14.339533   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:14.339589   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:14.343954   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:14.343983   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:14.391168   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:14.391203   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:14.436713   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:14.436743   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:14.473899   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:14.473934   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:14.533733   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:14.533763   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:14.924087   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:14.924121   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:14.972652   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:14.972684   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:15.074398   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:15.074443   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:15.206993   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:15.207026   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:15.258807   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:15.258841   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:15.299162   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:15.299209   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:15.315611   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:15.315643   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:17.859914   45441 api_server.go:253] Checking apiserver healthz at https://192.168.72.52:8444/healthz ...
	I0130 20:48:17.865483   45441 api_server.go:279] https://192.168.72.52:8444/healthz returned 200:
	ok
	I0130 20:48:17.866876   45441 api_server.go:141] control plane version: v1.28.4
	I0130 20:48:17.866899   45441 api_server.go:131] duration metric: took 3.889528289s to wait for apiserver health ...
	I0130 20:48:17.866910   45441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:48:17.866937   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 20:48:17.866992   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 20:48:17.907357   45441 cri.go:89] found id: "39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:17.907386   45441 cri.go:89] found id: ""
	I0130 20:48:17.907396   45441 logs.go:276] 1 containers: [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481]
	I0130 20:48:17.907461   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.911558   45441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 20:48:17.911617   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 20:48:17.948725   45441 cri.go:89] found id: "1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:17.948747   45441 cri.go:89] found id: ""
	I0130 20:48:17.948757   45441 logs.go:276] 1 containers: [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15]
	I0130 20:48:17.948819   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.953304   45441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 20:48:17.953365   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 20:48:17.994059   45441 cri.go:89] found id: "215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:17.994091   45441 cri.go:89] found id: ""
	I0130 20:48:17.994101   45441 logs.go:276] 1 containers: [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb]
	I0130 20:48:17.994158   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:17.998347   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 20:48:17.998402   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 20:48:18.047814   45441 cri.go:89] found id: "8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:18.047842   45441 cri.go:89] found id: ""
	I0130 20:48:18.047853   45441 logs.go:276] 1 containers: [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7]
	I0130 20:48:18.047914   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.052864   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 20:48:18.052927   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 20:48:18.091597   45441 cri.go:89] found id: "c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:18.091617   45441 cri.go:89] found id: ""
	I0130 20:48:18.091625   45441 logs.go:276] 1 containers: [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe]
	I0130 20:48:18.091680   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.095921   45441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 20:48:18.096034   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 20:48:18.146922   45441 cri.go:89] found id: "1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:18.146942   45441 cri.go:89] found id: ""
	I0130 20:48:18.146952   45441 logs.go:276] 1 containers: [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed]
	I0130 20:48:18.147002   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.156610   45441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 20:48:18.156671   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 20:48:18.209680   45441 cri.go:89] found id: ""
	I0130 20:48:18.209701   45441 logs.go:276] 0 containers: []
	W0130 20:48:18.209711   45441 logs.go:278] No container was found matching "kindnet"
	I0130 20:48:18.209716   45441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 20:48:18.209761   45441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 20:48:18.253810   45441 cri.go:89] found id: "f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:18.253834   45441 cri.go:89] found id: ""
	I0130 20:48:18.253841   45441 logs.go:276] 1 containers: [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06]
	I0130 20:48:18.253883   45441 ssh_runner.go:195] Run: which crictl
	I0130 20:48:18.258404   45441 logs.go:123] Gathering logs for storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] ...
	I0130 20:48:18.258433   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06"
	I0130 20:48:18.305088   45441 logs.go:123] Gathering logs for CRI-O ...
	I0130 20:48:18.305117   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 20:48:18.629911   45441 logs.go:123] Gathering logs for container status ...
	I0130 20:48:18.629948   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 20:48:18.677758   45441 logs.go:123] Gathering logs for kubelet ...
	I0130 20:48:18.677787   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0130 20:48:18.779831   45441 logs.go:123] Gathering logs for dmesg ...
	I0130 20:48:18.779869   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 20:48:18.795995   45441 logs.go:123] Gathering logs for kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] ...
	I0130 20:48:18.796024   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481"
	I0130 20:48:18.844003   45441 logs.go:123] Gathering logs for coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] ...
	I0130 20:48:18.844034   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb"
	I0130 20:48:18.884617   45441 logs.go:123] Gathering logs for kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] ...
	I0130 20:48:18.884645   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7"
	I0130 20:48:18.931556   45441 logs.go:123] Gathering logs for describe nodes ...
	I0130 20:48:18.931591   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 20:48:19.066569   45441 logs.go:123] Gathering logs for etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] ...
	I0130 20:48:19.066606   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15"
	I0130 20:48:19.115012   45441 logs.go:123] Gathering logs for kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] ...
	I0130 20:48:19.115041   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe"
	I0130 20:48:19.169107   45441 logs.go:123] Gathering logs for kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] ...
	I0130 20:48:19.169137   45441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed"
	I0130 20:48:21.731792   45441 system_pods.go:59] 8 kube-system pods found
	I0130 20:48:21.731816   45441 system_pods.go:61] "coredns-5dd5756b68-tlb8h" [547c1fe4-3ef7-421a-b460-660a05caa2ab] Running
	I0130 20:48:21.731821   45441 system_pods.go:61] "etcd-default-k8s-diff-port-877742" [a8ff44ad-5fec-415b-a574-75bce55acf8e] Running
	I0130 20:48:21.731826   45441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-877742" [b183118a-5376-412c-a991-eaebf0e6a46e] Running
	I0130 20:48:21.731830   45441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-877742" [cd5170b0-7d1c-45fd-9670-376d04e7016b] Running
	I0130 20:48:21.731834   45441 system_pods.go:61] "kube-proxy-59zvd" [ca6ef754-0898-4e1d-9ff2-9f42f456db6c] Running
	I0130 20:48:21.731838   45441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-877742" [5870d68e-b7af-408b-9484-a7e414bbe7f7] Running
	I0130 20:48:21.731845   45441 system_pods.go:61] "metrics-server-57f55c9bc5-xjc2m" [7b9a273b-d328-4ae8-925e-5bb305cfe574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:48:21.731853   45441 system_pods.go:61] "storage-provisioner" [db1a28e4-0c45-496e-a566-32a402b0841d] Running
	I0130 20:48:21.731862   45441 system_pods.go:74] duration metric: took 3.864945598s to wait for pod list to return data ...
	I0130 20:48:21.731878   45441 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:48:21.734586   45441 default_sa.go:45] found service account: "default"
	I0130 20:48:21.734604   45441 default_sa.go:55] duration metric: took 2.721611ms for default service account to be created ...
	I0130 20:48:21.734611   45441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:48:21.740794   45441 system_pods.go:86] 8 kube-system pods found
	I0130 20:48:21.740817   45441 system_pods.go:89] "coredns-5dd5756b68-tlb8h" [547c1fe4-3ef7-421a-b460-660a05caa2ab] Running
	I0130 20:48:21.740822   45441 system_pods.go:89] "etcd-default-k8s-diff-port-877742" [a8ff44ad-5fec-415b-a574-75bce55acf8e] Running
	I0130 20:48:21.740827   45441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-877742" [b183118a-5376-412c-a991-eaebf0e6a46e] Running
	I0130 20:48:21.740831   45441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-877742" [cd5170b0-7d1c-45fd-9670-376d04e7016b] Running
	I0130 20:48:21.740835   45441 system_pods.go:89] "kube-proxy-59zvd" [ca6ef754-0898-4e1d-9ff2-9f42f456db6c] Running
	I0130 20:48:21.740840   45441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-877742" [5870d68e-b7af-408b-9484-a7e414bbe7f7] Running
	I0130 20:48:21.740846   45441 system_pods.go:89] "metrics-server-57f55c9bc5-xjc2m" [7b9a273b-d328-4ae8-925e-5bb305cfe574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 20:48:21.740853   45441 system_pods.go:89] "storage-provisioner" [db1a28e4-0c45-496e-a566-32a402b0841d] Running
	I0130 20:48:21.740860   45441 system_pods.go:126] duration metric: took 6.244006ms to wait for k8s-apps to be running ...
	I0130 20:48:21.740867   45441 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:48:21.740906   45441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:48:21.756380   45441 system_svc.go:56] duration metric: took 15.505755ms WaitForService to wait for kubelet.
	I0130 20:48:21.756405   45441 kubeadm.go:581] duration metric: took 4m15.894523943s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:48:21.756429   45441 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:48:21.759579   45441 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:48:21.759605   45441 node_conditions.go:123] node cpu capacity is 2
	I0130 20:48:21.759616   45441 node_conditions.go:105] duration metric: took 3.182491ms to run NodePressure ...
	I0130 20:48:21.759626   45441 start.go:228] waiting for startup goroutines ...
	I0130 20:48:21.759632   45441 start.go:233] waiting for cluster config update ...
	I0130 20:48:21.759642   45441 start.go:242] writing updated cluster config ...
	I0130 20:48:21.759879   45441 ssh_runner.go:195] Run: rm -f paused
	I0130 20:48:21.808471   45441 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:48:21.810628   45441 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-877742" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 20:38:58 UTC, ends at Tue 2024-01-30 20:57:43 UTC. --
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.708052195Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=99f73948-9190-4a4e-9c33-6472a3f36f16 name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.709964529Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8edbe700-66bb-4203-a76c-5c837369deab name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.710923568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648263710902891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8edbe700-66bb-4203-a76c-5c837369deab name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.711707999Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1a1d7821-bd15-4bc5-8e18-994f136f8094 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.711803107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1a1d7821-bd15-4bc5-8e18-994f136f8094 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.712091649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a18f05c5071cc3d5c2acc3d8ef16ae221090aa663bbb0034595edf8fe754d1c7,PodSandboxId:0bcb2ebe732effbbd1782098056d09e0461377b9f8392a575dda7e19f974b3dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647499873830054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff3eddf0-39e5-415f-b6e1-2b9324ae67f5,},Annotations:map[string]string{io.kubernetes.container.hash: cd7a3f83,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caea105ac6df617e081cecc81dd4ba65c9a637c619374ac763237d862d8af98,PodSandboxId:8550dcc0516f94147ebec61a7ca74ca214e4a3dc445116f9066b2ea9d06ffced,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706647499516969258,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbdxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82328394-34a6-476e-994b-8469c1cd370f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c130f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15595b34a557919456566f890b263d1b2ec3d14ab43bd764942f7538fafd743c,PodSandboxId:b4e9153ebe1b813f286c161fe8b9bd7c104eb49b5d427340a0a522c12804d9e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706647498994371878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7qhmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03050fc6-39c5-45fa-8fc0-fd41a78392f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2481486e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e776ff23c68223bb4ff5b75f74ef36335945d9c5b39fba0cce4470586ca7211,PodSandboxId:5322c11b7400ef45d2bbbb8d63823a8744887bbbc6e1e06baa306bb090531f3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706647473520200673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8546a5b70f7d75b0ec40caabe5c78413,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d0a5824a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5edef1c3bea3d540bb83c092f2f450ba4092659166aa785addc4288c2d06e516,PodSandboxId:19e908cc9cdeb9d5ff9aa1c1288dcd3ef2cff34c33d908eed5a808195f231353,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706647472051324526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acd35d56c0f1b6dea95347cb06d05926b34dca9b2d90a1a9e23e04ea99abd48,PodSandboxId:a0fb4944a8a53f907b174cea6e6754b33d5471e164eeab49382797657dbfd6a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706647471871528939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f169069135e0a2b8eb5fc8f9181,},Annotations:map[string]string{io.kubern
etes.container.hash: e97c633e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44810126fec013f866d98b5fbc8a24ce47c290be31393e33afe9433d5b72f51,PodSandboxId:4df7291754b09b207030069a01d7d857e192de5dcaa8ab49665a205db74eba90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706647471736787731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1a1d7821-bd15-4bc5-8e18-994f136f8094 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.748923695Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=a2d3c472-e42d-4b90-a959-5cf54c36d51c name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.749186848Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7c3957aa13546b40df3f4ad1b4ce6b3f9e0a5a8cb75852443b514b9bfcdc3641,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-22948,Uid:89eebcb3-0362-49b7-8074-6060ed865fc7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647499910800144,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-22948,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89eebcb3-0362-49b7-8074-6060ed865fc7,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T20:44:59.54397065Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0bcb2ebe732effbbd1782098056d09e0461377b9f8392a575dda7e19f974b3dd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ff3eddf0-39e5-415f-b6e1-2b9324ae67f
5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647499104828493,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff3eddf0-39e5-415f-b6e1-2b9324ae67f5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"
volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-30T20:44:58.760639425Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b4e9153ebe1b813f286c161fe8b9bd7c104eb49b5d427340a0a522c12804d9e9,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-7qhmc,Uid:03050fc6-39c5-45fa-8fc0-fd41a78392f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647498692533397,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-7qhmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03050fc6-39c5-45fa-8fc0-fd41a78392f1,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T20:44:58.35422992Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8550dcc0516f94147ebec61a7ca74ca214e4a3dc445116f9066b2ea9d06ffced,Metadata:&PodSandboxMetadata{Name:kube-proxy-zbdxm,Uid:82328394-34a6-476e-994b-
8469c1cd370f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647498428932889,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zbdxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82328394-34a6-476e-994b-8469c1cd370f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T20:44:57.174609205Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4df7291754b09b207030069a01d7d857e192de5dcaa8ab49665a205db74eba90,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-150971,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647471199800603,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2024-01-30T20:44:30.804283459Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5322c11b7400ef45d2bbbb8d63823a8744887bbbc6e1e06baa306bb090531f3c,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-150971,Uid:8546a5b70f7d75b0ec40caabe5c78413,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647471190172489,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8546a5b70f7d75b0ec40caabe5c78413,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8546a5b70f7d75b0ec40caabe5c78413,kubernetes.io/config.seen: 2024-01-30T20:44:30.807159804Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:19e908cc9cdeb9d5ff9aa1c1288dcd
3ef2cff34c33d908eed5a808195f231353,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-150971,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647471166578797,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2024-01-30T20:44:30.805806497Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0fb4944a8a53f907b174cea6e6754b33d5471e164eeab49382797657dbfd6a6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-150971,Uid:f9725f169069135e0a2b8eb5fc8f9181,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706647471152624558,Labels:map[string]string{component: kube-apiserver,io.k
ubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f169069135e0a2b8eb5fc8f9181,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f9725f169069135e0a2b8eb5fc8f9181,kubernetes.io/config.seen: 2024-01-30T20:44:30.794129981Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=a2d3c472-e42d-4b90-a959-5cf54c36d51c name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.750264709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4fc7422e-5bc8-49ff-ad20-424b2cbf7ca2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.750386358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4fc7422e-5bc8-49ff-ad20-424b2cbf7ca2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.750670922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a18f05c5071cc3d5c2acc3d8ef16ae221090aa663bbb0034595edf8fe754d1c7,PodSandboxId:0bcb2ebe732effbbd1782098056d09e0461377b9f8392a575dda7e19f974b3dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647499873830054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff3eddf0-39e5-415f-b6e1-2b9324ae67f5,},Annotations:map[string]string{io.kubernetes.container.hash: cd7a3f83,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caea105ac6df617e081cecc81dd4ba65c9a637c619374ac763237d862d8af98,PodSandboxId:8550dcc0516f94147ebec61a7ca74ca214e4a3dc445116f9066b2ea9d06ffced,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706647499516969258,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbdxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82328394-34a6-476e-994b-8469c1cd370f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c130f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15595b34a557919456566f890b263d1b2ec3d14ab43bd764942f7538fafd743c,PodSandboxId:b4e9153ebe1b813f286c161fe8b9bd7c104eb49b5d427340a0a522c12804d9e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706647498994371878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7qhmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03050fc6-39c5-45fa-8fc0-fd41a78392f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2481486e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e776ff23c68223bb4ff5b75f74ef36335945d9c5b39fba0cce4470586ca7211,PodSandboxId:5322c11b7400ef45d2bbbb8d63823a8744887bbbc6e1e06baa306bb090531f3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706647473520200673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8546a5b70f7d75b0ec40caabe5c78413,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d0a5824a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5edef1c3bea3d540bb83c092f2f450ba4092659166aa785addc4288c2d06e516,PodSandboxId:19e908cc9cdeb9d5ff9aa1c1288dcd3ef2cff34c33d908eed5a808195f231353,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706647472051324526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acd35d56c0f1b6dea95347cb06d05926b34dca9b2d90a1a9e23e04ea99abd48,PodSandboxId:a0fb4944a8a53f907b174cea6e6754b33d5471e164eeab49382797657dbfd6a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706647471871528939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f169069135e0a2b8eb5fc8f9181,},Annotations:map[string]string{io.kubern
etes.container.hash: e97c633e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44810126fec013f866d98b5fbc8a24ce47c290be31393e33afe9433d5b72f51,PodSandboxId:4df7291754b09b207030069a01d7d857e192de5dcaa8ab49665a205db74eba90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706647471736787731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4fc7422e-5bc8-49ff-ad20-424b2cbf7ca2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.761679121Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=670cbc72-65d0-4626-8b92-da5785224baa name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.761773231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=670cbc72-65d0-4626-8b92-da5785224baa name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.763767895Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bba6a3bc-b11b-4a6f-9e05-2c0288b01b6a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.764912880Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648263764884320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=bba6a3bc-b11b-4a6f-9e05-2c0288b01b6a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.770961917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b35a74f2-d562-40ce-8a22-115e12af2167 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.771073154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b35a74f2-d562-40ce-8a22-115e12af2167 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.771384770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a18f05c5071cc3d5c2acc3d8ef16ae221090aa663bbb0034595edf8fe754d1c7,PodSandboxId:0bcb2ebe732effbbd1782098056d09e0461377b9f8392a575dda7e19f974b3dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647499873830054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff3eddf0-39e5-415f-b6e1-2b9324ae67f5,},Annotations:map[string]string{io.kubernetes.container.hash: cd7a3f83,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caea105ac6df617e081cecc81dd4ba65c9a637c619374ac763237d862d8af98,PodSandboxId:8550dcc0516f94147ebec61a7ca74ca214e4a3dc445116f9066b2ea9d06ffced,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706647499516969258,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbdxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82328394-34a6-476e-994b-8469c1cd370f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c130f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15595b34a557919456566f890b263d1b2ec3d14ab43bd764942f7538fafd743c,PodSandboxId:b4e9153ebe1b813f286c161fe8b9bd7c104eb49b5d427340a0a522c12804d9e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706647498994371878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7qhmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03050fc6-39c5-45fa-8fc0-fd41a78392f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2481486e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e776ff23c68223bb4ff5b75f74ef36335945d9c5b39fba0cce4470586ca7211,PodSandboxId:5322c11b7400ef45d2bbbb8d63823a8744887bbbc6e1e06baa306bb090531f3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706647473520200673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8546a5b70f7d75b0ec40caabe5c78413,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d0a5824a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5edef1c3bea3d540bb83c092f2f450ba4092659166aa785addc4288c2d06e516,PodSandboxId:19e908cc9cdeb9d5ff9aa1c1288dcd3ef2cff34c33d908eed5a808195f231353,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706647472051324526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acd35d56c0f1b6dea95347cb06d05926b34dca9b2d90a1a9e23e04ea99abd48,PodSandboxId:a0fb4944a8a53f907b174cea6e6754b33d5471e164eeab49382797657dbfd6a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706647471871528939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f169069135e0a2b8eb5fc8f9181,},Annotations:map[string]string{io.kubern
etes.container.hash: e97c633e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44810126fec013f866d98b5fbc8a24ce47c290be31393e33afe9433d5b72f51,PodSandboxId:4df7291754b09b207030069a01d7d857e192de5dcaa8ab49665a205db74eba90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706647471736787731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b35a74f2-d562-40ce-8a22-115e12af2167 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.813760059Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=53698935-8596-44e4-999d-0f7f9a2b1c3b name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.813839455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=53698935-8596-44e4-999d-0f7f9a2b1c3b name=/runtime.v1.RuntimeService/Version
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.815586696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4a5c559b-3df7-479f-8455-f7f10547b4c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.816178327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648263816150886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4a5c559b-3df7-479f-8455-f7f10547b4c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.816857170Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e8642aac-0781-4fd4-8885-4c900e31abc4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.816920589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e8642aac-0781-4fd4-8885-4c900e31abc4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 20:57:43 old-k8s-version-150971 crio[730]: time="2024-01-30 20:57:43.817135618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a18f05c5071cc3d5c2acc3d8ef16ae221090aa663bbb0034595edf8fe754d1c7,PodSandboxId:0bcb2ebe732effbbd1782098056d09e0461377b9f8392a575dda7e19f974b3dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647499873830054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff3eddf0-39e5-415f-b6e1-2b9324ae67f5,},Annotations:map[string]string{io.kubernetes.container.hash: cd7a3f83,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caea105ac6df617e081cecc81dd4ba65c9a637c619374ac763237d862d8af98,PodSandboxId:8550dcc0516f94147ebec61a7ca74ca214e4a3dc445116f9066b2ea9d06ffced,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706647499516969258,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbdxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82328394-34a6-476e-994b-8469c1cd370f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c130f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15595b34a557919456566f890b263d1b2ec3d14ab43bd764942f7538fafd743c,PodSandboxId:b4e9153ebe1b813f286c161fe8b9bd7c104eb49b5d427340a0a522c12804d9e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706647498994371878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7qhmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03050fc6-39c5-45fa-8fc0-fd41a78392f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2481486e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e776ff23c68223bb4ff5b75f74ef36335945d9c5b39fba0cce4470586ca7211,PodSandboxId:5322c11b7400ef45d2bbbb8d63823a8744887bbbc6e1e06baa306bb090531f3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706647473520200673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8546a5b70f7d75b0ec40caabe5c78413,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d0a5824a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5edef1c3bea3d540bb83c092f2f450ba4092659166aa785addc4288c2d06e516,PodSandboxId:19e908cc9cdeb9d5ff9aa1c1288dcd3ef2cff34c33d908eed5a808195f231353,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706647472051324526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acd35d56c0f1b6dea95347cb06d05926b34dca9b2d90a1a9e23e04ea99abd48,PodSandboxId:a0fb4944a8a53f907b174cea6e6754b33d5471e164eeab49382797657dbfd6a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706647471871528939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f169069135e0a2b8eb5fc8f9181,},Annotations:map[string]string{io.kubern
etes.container.hash: e97c633e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44810126fec013f866d98b5fbc8a24ce47c290be31393e33afe9433d5b72f51,PodSandboxId:4df7291754b09b207030069a01d7d857e192de5dcaa8ab49665a205db74eba90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706647471736787731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-150971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e8642aac-0781-4fd4-8885-4c900e31abc4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a18f05c5071cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   0bcb2ebe732ef       storage-provisioner
	9caea105ac6df       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   8550dcc0516f9       kube-proxy-zbdxm
	15595b34a5579       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   12 minutes ago      Running             coredns                   0                   b4e9153ebe1b8       coredns-5644d7b6d9-7qhmc
	9e776ff23c682       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   5322c11b7400e       etcd-old-k8s-version-150971
	5edef1c3bea3d       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   19e908cc9cdeb       kube-scheduler-old-k8s-version-150971
	3acd35d56c0f1       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            0                   a0fb4944a8a53       kube-apiserver-old-k8s-version-150971
	c44810126fec0       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   4df7291754b09       kube-controller-manager-old-k8s-version-150971
	
	
	==> coredns [15595b34a557919456566f890b263d1b2ec3d14ab43bd764942f7538fafd743c] <==
	.:53
	2024-01-30T20:44:59.299Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2024-01-30T20:44:59.299Z [INFO] CoreDNS-1.6.2
	2024-01-30T20:44:59.299Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-30T20:44:59.314Z [INFO] 127.0.0.1:56994 - 48072 "HINFO IN 7427872625022628517.7266604427229779466. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015095819s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-150971
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-150971
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=old-k8s-version-150971
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T20_44_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 20:44:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 20:57:37 +0000   Tue, 30 Jan 2024 20:44:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 20:57:37 +0000   Tue, 30 Jan 2024 20:44:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 20:57:37 +0000   Tue, 30 Jan 2024 20:44:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 20:57:37 +0000   Tue, 30 Jan 2024 20:44:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    old-k8s-version-150971
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 2bfe980287ab43929699b829e9c9d14b
	 System UUID:                2bfe9802-87ab-4392-9699-b829e9c9d14b
	 Boot ID:                    0d16b4b8-7f22-45dc-9866-42e9c8b3f5ef
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-7qhmc                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                etcd-old-k8s-version-150971                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-apiserver-old-k8s-version-150971             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-150971    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-zbdxm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-150971             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-22948                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-150971     Node old-k8s-version-150971 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-150971     Node old-k8s-version-150971 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-150971     Node old-k8s-version-150971 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-150971  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan30 20:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070534] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.759042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.277267] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145830] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000008] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan30 20:39] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.975604] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.130209] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.163264] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.147872] systemd-fstab-generator[691]: Ignoring "noauto" for root device
	[  +0.263126] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[ +19.175912] systemd-fstab-generator[1038]: Ignoring "noauto" for root device
	[  +0.419326] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.227125] kauditd_printk_skb: 20 callbacks suppressed
	[Jan30 20:40] hrtimer: interrupt took 4310528 ns
	[Jan30 20:44] systemd-fstab-generator[3200]: Ignoring "noauto" for root device
	[  +0.667856] kauditd_printk_skb: 8 callbacks suppressed
	[Jan30 20:45] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9e776ff23c68223bb4ff5b75f74ef36335945d9c5b39fba0cce4470586ca7211] <==
	2024-01-30 20:44:33.644289 I | raft: b6c76b3131c1024 became follower at term 0
	2024-01-30 20:44:33.644308 I | raft: newRaft b6c76b3131c1024 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-30 20:44:33.644323 I | raft: b6c76b3131c1024 became follower at term 1
	2024-01-30 20:44:33.651330 W | auth: simple token is not cryptographically signed
	2024-01-30 20:44:33.656676 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-30 20:44:33.657611 I | etcdserver: b6c76b3131c1024 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-30 20:44:33.657916 I | etcdserver/membership: added member b6c76b3131c1024 [https://192.168.39.16:2380] to cluster cad58bbf0f3daddf
	2024-01-30 20:44:33.659560 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-30 20:44:33.660032 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-30 20:44:33.660166 I | embed: listening for metrics on http://192.168.39.16:2381
	2024-01-30 20:44:34.044765 I | raft: b6c76b3131c1024 is starting a new election at term 1
	2024-01-30 20:44:34.044906 I | raft: b6c76b3131c1024 became candidate at term 2
	2024-01-30 20:44:34.045062 I | raft: b6c76b3131c1024 received MsgVoteResp from b6c76b3131c1024 at term 2
	2024-01-30 20:44:34.045189 I | raft: b6c76b3131c1024 became leader at term 2
	2024-01-30 20:44:34.045285 I | raft: raft.node: b6c76b3131c1024 elected leader b6c76b3131c1024 at term 2
	2024-01-30 20:44:34.045674 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-30 20:44:34.047297 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-30 20:44:34.047343 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-30 20:44:34.047377 I | etcdserver: published {Name:old-k8s-version-150971 ClientURLs:[https://192.168.39.16:2379]} to cluster cad58bbf0f3daddf
	2024-01-30 20:44:34.047383 I | embed: ready to serve client requests
	2024-01-30 20:44:34.047784 I | embed: ready to serve client requests
	2024-01-30 20:44:34.048719 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-30 20:44:34.050922 I | embed: serving client requests on 192.168.39.16:2379
	2024-01-30 20:54:34.066909 I | mvcc: store.index: compact 664
	2024-01-30 20:54:34.070586 I | mvcc: finished scheduled compaction at 664 (took 3.171275ms)
	
	
	==> kernel <==
	 20:57:44 up 18 min,  0 users,  load average: 0.07, 0.14, 0.16
	Linux old-k8s-version-150971 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3acd35d56c0f1b6dea95347cb06d05926b34dca9b2d90a1a9e23e04ea99abd48] <==
	I0130 20:50:38.306188       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 20:50:38.306542       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 20:50:38.306623       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:50:38.306636       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:52:38.307157       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 20:52:38.307629       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 20:52:38.307717       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:52:38.307743       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:54:38.308883       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 20:54:38.309168       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 20:54:38.309257       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:54:38.309279       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:55:38.309618       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 20:55:38.309789       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 20:55:38.309886       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:55:38.309914       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:57:38.310231       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 20:57:38.310686       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 20:57:38.310776       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:57:38.310799       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c44810126fec013f866d98b5fbc8a24ce47c290be31393e33afe9433d5b72f51] <==
	W0130 20:51:21.260817       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:51:30.547201       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:51:53.264663       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:52:00.799586       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:52:25.266772       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:52:31.051566       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:52:57.269679       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:53:01.303524       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:53:29.271844       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:53:31.555398       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:54:01.274155       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:54:01.807212       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0130 20:54:32.060322       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:54:33.276548       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:55:02.312382       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:55:05.278879       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:55:32.564742       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:55:37.280969       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:56:02.817022       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:56:09.283104       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:56:33.070062       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:56:41.284955       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:57:03.322568       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 20:57:13.287205       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 20:57:33.574655       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [9caea105ac6df617e081cecc81dd4ba65c9a637c619374ac763237d862d8af98] <==
	W0130 20:44:59.896676       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0130 20:44:59.949784       1 node.go:135] Successfully retrieved node IP: 192.168.39.16
	I0130 20:44:59.949847       1 server_others.go:149] Using iptables Proxier.
	I0130 20:44:59.963781       1 server.go:529] Version: v1.16.0
	I0130 20:44:59.974254       1 config.go:131] Starting endpoints config controller
	I0130 20:44:59.976908       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0130 20:44:59.977182       1 config.go:313] Starting service config controller
	I0130 20:44:59.977418       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0130 20:45:00.093002       1 shared_informer.go:204] Caches are synced for service config 
	I0130 20:45:00.093282       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [5edef1c3bea3d540bb83c092f2f450ba4092659166aa785addc4288c2d06e516] <==
	I0130 20:44:37.306193       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0130 20:44:37.307006       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0130 20:44:37.366779       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 20:44:37.367041       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 20:44:37.367216       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 20:44:37.367302       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 20:44:37.367362       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 20:44:37.367423       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 20:44:37.367574       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 20:44:37.372780       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 20:44:37.372868       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 20:44:37.376604       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 20:44:37.376797       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 20:44:38.369553       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 20:44:38.373627       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 20:44:38.376603       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 20:44:38.377947       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 20:44:38.379959       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 20:44:38.381126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 20:44:38.385575       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 20:44:38.386563       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 20:44:38.387557       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 20:44:38.389008       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 20:44:38.392386       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 20:44:56.846806       1 factory.go:585] pod is already present in the activeQ
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 20:38:58 UTC, ends at Tue 2024-01-30 20:57:44 UTC. --
	Jan 30 20:53:27 old-k8s-version-150971 kubelet[3206]: E0130 20:53:27.398873    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:53:42 old-k8s-version-150971 kubelet[3206]: E0130 20:53:42.399624    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:53:55 old-k8s-version-150971 kubelet[3206]: E0130 20:53:55.399991    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:54:06 old-k8s-version-150971 kubelet[3206]: E0130 20:54:06.398229    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:54:19 old-k8s-version-150971 kubelet[3206]: E0130 20:54:19.398632    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:54:30 old-k8s-version-150971 kubelet[3206]: E0130 20:54:30.506299    3206 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 30 20:54:33 old-k8s-version-150971 kubelet[3206]: E0130 20:54:33.398293    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:54:44 old-k8s-version-150971 kubelet[3206]: E0130 20:54:44.399863    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:54:57 old-k8s-version-150971 kubelet[3206]: E0130 20:54:57.398741    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:55:12 old-k8s-version-150971 kubelet[3206]: E0130 20:55:12.399663    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:55:23 old-k8s-version-150971 kubelet[3206]: E0130 20:55:23.398565    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:55:37 old-k8s-version-150971 kubelet[3206]: E0130 20:55:37.413553    3206 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 20:55:37 old-k8s-version-150971 kubelet[3206]: E0130 20:55:37.413621    3206 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 20:55:37 old-k8s-version-150971 kubelet[3206]: E0130 20:55:37.413681    3206 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 20:55:37 old-k8s-version-150971 kubelet[3206]: E0130 20:55:37.413713    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 30 20:55:51 old-k8s-version-150971 kubelet[3206]: E0130 20:55:51.399082    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:56:02 old-k8s-version-150971 kubelet[3206]: E0130 20:56:02.399361    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:56:17 old-k8s-version-150971 kubelet[3206]: E0130 20:56:17.398287    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:56:29 old-k8s-version-150971 kubelet[3206]: E0130 20:56:29.398384    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:56:40 old-k8s-version-150971 kubelet[3206]: E0130 20:56:40.400944    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:56:54 old-k8s-version-150971 kubelet[3206]: E0130 20:56:54.398645    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:57:05 old-k8s-version-150971 kubelet[3206]: E0130 20:57:05.399214    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:57:20 old-k8s-version-150971 kubelet[3206]: E0130 20:57:20.400215    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:57:31 old-k8s-version-150971 kubelet[3206]: E0130 20:57:31.398696    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 20:57:43 old-k8s-version-150971 kubelet[3206]: E0130 20:57:43.398954    3206 pod_workers.go:191] Error syncing pod 89eebcb3-0362-49b7-8074-6060ed865fc7 ("metrics-server-74d5856cc6-22948_kube-system(89eebcb3-0362-49b7-8074-6060ed865fc7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [a18f05c5071cc3d5c2acc3d8ef16ae221090aa663bbb0034595edf8fe754d1c7] <==
	I0130 20:45:00.174576       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 20:45:00.190353       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 20:45:00.192193       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 20:45:00.207961       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 20:45:00.208954       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-150971_ba0f01f0-87aa-4fec-9246-93103f198f70!
	I0130 20:45:00.218859       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78efc787-05b9-458a-b56a-6a3ffd7f6b0a", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-150971_ba0f01f0-87aa-4fec-9246-93103f198f70 became leader
	I0130 20:45:00.309893       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-150971_ba0f01f0-87aa-4fec-9246-93103f198f70!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-150971 -n old-k8s-version-150971
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-150971 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-22948
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-150971 describe pod metrics-server-74d5856cc6-22948
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-150971 describe pod metrics-server-74d5856cc6-22948: exit status 1 (66.361833ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-22948" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-150971 describe pod metrics-server-74d5856cc6-22948: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (138.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (168.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-30 21:00:10.99590548 +0000 UTC m=+5848.072875288
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-877742 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-877742 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.497µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-877742 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-877742 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-877742 logs -n 25: (1.446314603s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p embed-certs-208583                 | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-150971        | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC | 30 Jan 24 20:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-877742       | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-877742 | jenkins | v1.32.0 | 30 Jan 24 20:34 UTC | 30 Jan 24 20:48 UTC |
	|         | default-k8s-diff-port-877742                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-150971             | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:36 UTC | 30 Jan 24 20:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-150971                              | old-k8s-version-150971       | jenkins | v1.32.0 | 30 Jan 24 20:57 UTC | 30 Jan 24 20:57 UTC |
	| start   | -p newest-cni-564644 --memory=2200 --alsologtostderr   | newest-cni-564644            | jenkins | v1.32.0 | 30 Jan 24 20:57 UTC | 30 Jan 24 20:58 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-473743                                   | no-preload-473743            | jenkins | v1.32.0 | 30 Jan 24 20:57 UTC | 30 Jan 24 20:57 UTC |
	| start   | -p auto-997045 --memory=3072                           | auto-997045                  | jenkins | v1.32.0 | 30 Jan 24 20:57 UTC | 30 Jan 24 21:00 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-208583                                  | embed-certs-208583           | jenkins | v1.32.0 | 30 Jan 24 20:58 UTC | 30 Jan 24 20:58 UTC |
	| start   | -p kindnet-997045                                      | kindnet-997045               | jenkins | v1.32.0 | 30 Jan 24 20:58 UTC | 30 Jan 24 20:59 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-564644             | newest-cni-564644            | jenkins | v1.32.0 | 30 Jan 24 20:58 UTC | 30 Jan 24 20:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-564644                                   | newest-cni-564644            | jenkins | v1.32.0 | 30 Jan 24 20:58 UTC | 30 Jan 24 20:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-564644                  | newest-cni-564644            | jenkins | v1.32.0 | 30 Jan 24 20:59 UTC | 30 Jan 24 20:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-564644 --memory=2200 --alsologtostderr   | newest-cni-564644            | jenkins | v1.32.0 | 30 Jan 24 20:59 UTC | 30 Jan 24 21:00 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | -p kindnet-997045 pgrep -a                             | kindnet-997045               | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p auto-997045 pgrep -a                                | auto-997045                  | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| image   | newest-cni-564644 image list                           | newest-cni-564644            | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-564644                                   | newest-cni-564644            | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-564644                                   | newest-cni-564644            | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-564644                                   | newest-cni-564644            | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 20:59:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 20:59:10.101301   51620 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:59:10.101579   51620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:59:10.101591   51620 out.go:309] Setting ErrFile to fd 2...
	I0130 20:59:10.101596   51620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:59:10.101830   51620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:59:10.102401   51620 out.go:303] Setting JSON to false
	I0130 20:59:10.103334   51620 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6095,"bootTime":1706642255,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 20:59:10.103408   51620 start.go:138] virtualization: kvm guest
	I0130 20:59:10.105805   51620 out.go:177] * [newest-cni-564644] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 20:59:10.107122   51620 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 20:59:10.107147   51620 notify.go:220] Checking for updates...
	I0130 20:59:10.108581   51620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 20:59:10.110193   51620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:59:10.111612   51620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 20:59:10.113002   51620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 20:59:10.114308   51620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 20:59:10.115867   51620 config.go:182] Loaded profile config "newest-cni-564644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 20:59:10.116257   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:10.116295   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:10.135466   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33973
	I0130 20:59:10.135889   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:10.136465   51620 main.go:141] libmachine: Using API Version  1
	I0130 20:59:10.136494   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:10.136823   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:10.137036   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 20:59:10.137300   51620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 20:59:10.137581   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:10.137621   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:10.151037   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34505
	I0130 20:59:10.151437   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:10.151860   51620 main.go:141] libmachine: Using API Version  1
	I0130 20:59:10.151878   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:10.152168   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:10.152370   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 20:59:10.187059   51620 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 20:59:10.188384   51620 start.go:298] selected driver: kvm2
	I0130 20:59:10.188400   51620 start.go:902] validating driver "kvm2" against &{Name:newest-cni-564644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-564644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:59:10.188490   51620 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 20:59:10.189164   51620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:59:10.189230   51620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 20:59:10.204094   51620 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 20:59:10.204439   51620 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0130 20:59:10.204498   51620 cni.go:84] Creating CNI manager for ""
	I0130 20:59:10.204507   51620 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:59:10.204525   51620 start_flags.go:321] config:
	{Name:newest-cni-564644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-564644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:59:10.204660   51620 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 20:59:10.206520   51620 out.go:177] * Starting control plane node newest-cni-564644 in cluster newest-cni-564644
	I0130 20:59:11.215146   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.215683   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has current primary IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.215701   50967 main.go:141] libmachine: (kindnet-997045) Found IP for machine: 192.168.61.163
	I0130 20:59:11.215711   50967 main.go:141] libmachine: (kindnet-997045) Reserving static IP address...
	I0130 20:59:11.216022   50967 main.go:141] libmachine: (kindnet-997045) DBG | unable to find host DHCP lease matching {name: "kindnet-997045", mac: "52:54:00:e3:7e:1a", ip: "192.168.61.163"} in network mk-kindnet-997045
	I0130 20:59:11.290666   50967 main.go:141] libmachine: (kindnet-997045) DBG | Getting to WaitForSSH function...
	I0130 20:59:11.290701   50967 main.go:141] libmachine: (kindnet-997045) Reserved static IP address: 192.168.61.163
	I0130 20:59:11.290716   50967 main.go:141] libmachine: (kindnet-997045) Waiting for SSH to be available...
	I0130 20:59:11.293092   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.293453   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:11.293481   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.293577   50967 main.go:141] libmachine: (kindnet-997045) DBG | Using SSH client type: external
	I0130 20:59:11.293614   50967 main.go:141] libmachine: (kindnet-997045) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/kindnet-997045/id_rsa (-rw-------)
	I0130 20:59:11.293642   50967 main.go:141] libmachine: (kindnet-997045) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/kindnet-997045/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:59:11.293657   50967 main.go:141] libmachine: (kindnet-997045) DBG | About to run SSH command:
	I0130 20:59:11.293674   50967 main.go:141] libmachine: (kindnet-997045) DBG | exit 0
	I0130 20:59:11.382849   50967 main.go:141] libmachine: (kindnet-997045) DBG | SSH cmd err, output: <nil>: 
	I0130 20:59:11.383149   50967 main.go:141] libmachine: (kindnet-997045) KVM machine creation complete!
	I0130 20:59:11.383444   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetConfigRaw
	I0130 20:59:11.383988   50967 main.go:141] libmachine: (kindnet-997045) Calling .DriverName
	I0130 20:59:11.384203   50967 main.go:141] libmachine: (kindnet-997045) Calling .DriverName
	I0130 20:59:11.384380   50967 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0130 20:59:11.384403   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetState
	I0130 20:59:11.385682   50967 main.go:141] libmachine: Detecting operating system of created instance...
	I0130 20:59:11.385696   50967 main.go:141] libmachine: Waiting for SSH to be available...
	I0130 20:59:11.385704   50967 main.go:141] libmachine: Getting to WaitForSSH function...
	I0130 20:59:11.385713   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:11.388400   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.388832   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:11.388862   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.388985   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:11.389140   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:11.389302   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:11.389454   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:11.389628   50967 main.go:141] libmachine: Using SSH client type: native
	I0130 20:59:11.390022   50967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.163 22 <nil> <nil>}
	I0130 20:59:11.390038   50967 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0130 20:59:11.506936   50967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:59:11.506968   50967 main.go:141] libmachine: Detecting the provisioner...
	I0130 20:59:11.506984   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:11.509601   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.510005   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:11.510040   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.510223   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:11.510421   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:11.510589   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:11.510725   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:11.510948   50967 main.go:141] libmachine: Using SSH client type: native
	I0130 20:59:11.511319   50967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.163 22 <nil> <nil>}
	I0130 20:59:11.511335   50967 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0130 20:59:11.627986   50967 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0130 20:59:11.628080   50967 main.go:141] libmachine: found compatible host: buildroot
	I0130 20:59:11.628096   50967 main.go:141] libmachine: Provisioning with buildroot...
	I0130 20:59:11.628104   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetMachineName
	I0130 20:59:11.628413   50967 buildroot.go:166] provisioning hostname "kindnet-997045"
	I0130 20:59:11.628450   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetMachineName
	I0130 20:59:11.628646   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:11.631321   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.631713   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:11.631733   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.631904   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:11.632081   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:11.632212   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:11.632310   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:11.632510   50967 main.go:141] libmachine: Using SSH client type: native
	I0130 20:59:11.632958   50967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.163 22 <nil> <nil>}
	I0130 20:59:11.632979   50967 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-997045 && echo "kindnet-997045" | sudo tee /etc/hostname
	I0130 20:59:11.760989   50967 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-997045
	
	I0130 20:59:11.761022   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:11.763824   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.764193   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:11.764229   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.764366   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:11.764547   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:11.764704   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:11.764842   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:11.764977   50967 main.go:141] libmachine: Using SSH client type: native
	I0130 20:59:11.765274   50967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.163 22 <nil> <nil>}
	I0130 20:59:11.765290   50967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-997045' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-997045/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-997045' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:59:11.890452   50967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:59:11.890484   50967 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:59:11.890514   50967 buildroot.go:174] setting up certificates
	I0130 20:59:11.890529   50967 provision.go:83] configureAuth start
	I0130 20:59:11.890546   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetMachineName
	I0130 20:59:11.890848   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetIP
	I0130 20:59:11.894047   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.894362   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:11.894384   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.894571   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:11.896806   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.897087   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:11.897116   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:11.897235   50967 provision.go:138] copyHostCerts
	I0130 20:59:11.897286   50967 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:59:11.897296   50967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:59:11.897346   50967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:59:11.897423   50967 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:59:11.897430   50967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:59:11.897451   50967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:59:11.897500   50967 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:59:11.897507   50967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:59:11.897523   50967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:59:11.897568   50967 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.kindnet-997045 san=[192.168.61.163 192.168.61.163 localhost 127.0.0.1 minikube kindnet-997045]
	I0130 20:59:12.120111   50967 provision.go:172] copyRemoteCerts
	I0130 20:59:12.120165   50967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:59:12.120189   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:12.123048   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.123433   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:12.123470   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.123641   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:12.123850   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:12.124044   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:12.124208   50967 sshutil.go:53] new ssh client: &{IP:192.168.61.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/kindnet-997045/id_rsa Username:docker}
	I0130 20:59:12.213403   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:59:12.237580   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0130 20:59:12.259255   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 20:59:12.281327   50967 provision.go:86] duration metric: configureAuth took 390.781326ms
	I0130 20:59:12.281357   50967 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:59:12.281532   50967 config.go:182] Loaded profile config "kindnet-997045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:59:12.281606   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:12.284296   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.284602   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:12.284628   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.284823   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:12.285054   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:12.285230   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:12.285371   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:12.285561   50967 main.go:141] libmachine: Using SSH client type: native
	I0130 20:59:12.285861   50967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.163 22 <nil> <nil>}
	I0130 20:59:12.285877   50967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:59:12.610476   50967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:59:12.610507   50967 main.go:141] libmachine: Checking connection to Docker...
	I0130 20:59:12.610517   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetURL
	I0130 20:59:12.611889   50967 main.go:141] libmachine: (kindnet-997045) DBG | Using libvirt version 6000000
	I0130 20:59:12.614105   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.614477   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:12.614518   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.614704   50967 main.go:141] libmachine: Docker is up and running!
	I0130 20:59:12.614719   50967 main.go:141] libmachine: Reticulating splines...
	I0130 20:59:12.614726   50967 client.go:171] LocalClient.Create took 26.685622163s
	I0130 20:59:12.614753   50967 start.go:167] duration metric: libmachine.API.Create for "kindnet-997045" took 26.685699398s
	I0130 20:59:12.614765   50967 start.go:300] post-start starting for "kindnet-997045" (driver="kvm2")
	I0130 20:59:12.614778   50967 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:59:12.614802   50967 main.go:141] libmachine: (kindnet-997045) Calling .DriverName
	I0130 20:59:12.615022   50967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:59:12.615050   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:12.617430   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.617793   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:12.617827   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.618003   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:12.618185   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:12.618349   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:12.618495   50967 sshutil.go:53] new ssh client: &{IP:192.168.61.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/kindnet-997045/id_rsa Username:docker}
	I0130 20:59:10.208157   51620 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 20:59:10.208186   51620 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0130 20:59:10.208192   51620 cache.go:56] Caching tarball of preloaded images
	I0130 20:59:10.208258   51620 preload.go:174] Found /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 20:59:10.208268   51620 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0130 20:59:10.208379   51620 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644/config.json ...
	I0130 20:59:10.208550   51620 start.go:365] acquiring machines lock for newest-cni-564644: {Name:mk35f9b41721dcc5b4ed1bac56fc056fd14a541b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 20:59:12.868124   51620 start.go:369] acquired machines lock for "newest-cni-564644" in 2.659506469s
	I0130 20:59:12.868175   51620 start.go:96] Skipping create...Using existing machine configuration
	I0130 20:59:12.868185   51620 fix.go:54] fixHost starting: 
	I0130 20:59:12.868542   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:12.868583   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:12.884853   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0130 20:59:12.885298   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:12.885890   51620 main.go:141] libmachine: Using API Version  1
	I0130 20:59:12.885919   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:12.886262   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:12.886450   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 20:59:12.886599   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetState
	I0130 20:59:12.888113   51620 fix.go:102] recreateIfNeeded on newest-cni-564644: state=Stopped err=<nil>
	I0130 20:59:12.888148   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	W0130 20:59:12.888300   51620 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 20:59:12.890859   51620 out.go:177] * Restarting existing kvm2 VM for "newest-cni-564644" ...
	I0130 20:59:12.705424   50967 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:59:12.709569   50967 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:59:12.709592   50967 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:59:12.709684   50967 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:59:12.709769   50967 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:59:12.709871   50967 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:59:12.718850   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:59:12.742068   50967 start.go:303] post-start completed in 127.292383ms
	I0130 20:59:12.742116   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetConfigRaw
	I0130 20:59:12.742671   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetIP
	I0130 20:59:12.745034   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.745414   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:12.745450   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.745696   50967 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/config.json ...
	I0130 20:59:12.745902   50967 start.go:128] duration metric: createHost completed in 26.837589169s
	I0130 20:59:12.745928   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:12.748215   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.748579   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:12.748599   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.748739   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:12.748944   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:12.749097   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:12.749241   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:12.749374   50967 main.go:141] libmachine: Using SSH client type: native
	I0130 20:59:12.749666   50967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.163 22 <nil> <nil>}
	I0130 20:59:12.749677   50967 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:59:12.867953   50967 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706648352.847083500
	
	I0130 20:59:12.867971   50967 fix.go:206] guest clock: 1706648352.847083500
	I0130 20:59:12.867981   50967 fix.go:219] Guest: 2024-01-30 20:59:12.8470835 +0000 UTC Remote: 2024-01-30 20:59:12.745914972 +0000 UTC m=+60.160589778 (delta=101.168528ms)
	I0130 20:59:12.868003   50967 fix.go:190] guest clock delta is within tolerance: 101.168528ms
	I0130 20:59:12.868009   50967 start.go:83] releasing machines lock for "kindnet-997045", held for 26.959877671s
	I0130 20:59:12.868040   50967 main.go:141] libmachine: (kindnet-997045) Calling .DriverName
	I0130 20:59:12.868325   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetIP
	I0130 20:59:12.870917   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.871306   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:12.871332   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.871506   50967 main.go:141] libmachine: (kindnet-997045) Calling .DriverName
	I0130 20:59:12.872031   50967 main.go:141] libmachine: (kindnet-997045) Calling .DriverName
	I0130 20:59:12.872224   50967 main.go:141] libmachine: (kindnet-997045) Calling .DriverName
	I0130 20:59:12.872306   50967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:59:12.872345   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:12.872464   50967 ssh_runner.go:195] Run: cat /version.json
	I0130 20:59:12.872513   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:12.875187   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.875457   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.875621   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:12.875662   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.875806   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:12.875885   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:12.875908   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:12.875972   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:12.876044   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:12.876143   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:12.876206   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:12.876273   50967 sshutil.go:53] new ssh client: &{IP:192.168.61.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/kindnet-997045/id_rsa Username:docker}
	I0130 20:59:12.876354   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:12.876478   50967 sshutil.go:53] new ssh client: &{IP:192.168.61.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/kindnet-997045/id_rsa Username:docker}
	I0130 20:59:12.990672   50967 ssh_runner.go:195] Run: systemctl --version
	I0130 20:59:12.997614   50967 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:59:13.169332   50967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:59:13.177403   50967 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:59:13.177473   50967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:59:13.191938   50967 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:59:13.191961   50967 start.go:475] detecting cgroup driver to use...
	I0130 20:59:13.192017   50967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:59:13.208256   50967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:59:13.221224   50967 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:59:13.221283   50967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:59:13.234938   50967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:59:13.249517   50967 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:59:13.355324   50967 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:59:13.485748   50967 docker.go:233] disabling docker service ...
	I0130 20:59:13.485817   50967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:59:13.499656   50967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:59:13.512043   50967 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:59:13.642876   50967 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:59:13.774891   50967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:59:13.792934   50967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:59:13.812144   50967 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:59:13.812211   50967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:59:13.821036   50967 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:59:13.821115   50967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:59:13.830290   50967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:59:13.839223   50967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:59:13.847883   50967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:59:13.857088   50967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:59:13.866333   50967 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:59:13.866403   50967 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:59:13.881442   50967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:59:13.891352   50967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:59:14.008869   50967 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:59:14.208616   50967 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:59:14.208687   50967 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:59:14.215301   50967 start.go:543] Will wait 60s for crictl version
	I0130 20:59:14.215367   50967 ssh_runner.go:195] Run: which crictl
	I0130 20:59:14.219595   50967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:59:14.264778   50967 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:59:14.264849   50967 ssh_runner.go:195] Run: crio --version
	I0130 20:59:14.321771   50967 ssh_runner.go:195] Run: crio --version
	I0130 20:59:14.387928   50967 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 20:59:09.981592   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:10.481241   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:10.981772   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:11.481125   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:11.981419   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:12.481797   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:12.981367   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:13.481387   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:13.981576   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:14.481636   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:12.891976   51620 main.go:141] libmachine: (newest-cni-564644) Calling .Start
	I0130 20:59:12.892156   51620 main.go:141] libmachine: (newest-cni-564644) Ensuring networks are active...
	I0130 20:59:12.892862   51620 main.go:141] libmachine: (newest-cni-564644) Ensuring network default is active
	I0130 20:59:12.893235   51620 main.go:141] libmachine: (newest-cni-564644) Ensuring network mk-newest-cni-564644 is active
	I0130 20:59:12.893701   51620 main.go:141] libmachine: (newest-cni-564644) Getting domain xml...
	I0130 20:59:12.894431   51620 main.go:141] libmachine: (newest-cni-564644) Creating domain...
	I0130 20:59:14.196120   51620 main.go:141] libmachine: (newest-cni-564644) Waiting to get IP...
	I0130 20:59:14.197139   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:14.197611   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:14.197673   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:14.197587   51683 retry.go:31] will retry after 212.454638ms: waiting for machine to come up
	I0130 20:59:14.412270   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:14.412699   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:14.412726   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:14.412670   51683 retry.go:31] will retry after 284.910568ms: waiting for machine to come up
	I0130 20:59:14.699395   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:14.700171   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:14.700203   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:14.700121   51683 retry.go:31] will retry after 430.184252ms: waiting for machine to come up
	I0130 20:59:14.389346   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetIP
	I0130 20:59:14.391992   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:14.392406   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:14.392451   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:14.392577   50967 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0130 20:59:14.396773   50967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:59:14.411001   50967 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 20:59:14.411068   50967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:59:14.450943   50967 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 20:59:14.451020   50967 ssh_runner.go:195] Run: which lz4
	I0130 20:59:14.455349   50967 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:59:14.459844   50967 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:59:14.459874   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 20:59:16.300287   50967 crio.go:444] Took 1.844971 seconds to copy over tarball
	I0130 20:59:16.300355   50967 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:59:14.981076   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:15.481774   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:15.981739   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:16.481699   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:16.981755   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:17.481079   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:17.981809   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:18.481405   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:18.981736   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:19.481429   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:15.131931   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:15.132409   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:15.132440   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:15.132361   51683 retry.go:31] will retry after 507.117112ms: waiting for machine to come up
	I0130 20:59:15.641113   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:15.641821   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:15.641844   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:15.641757   51683 retry.go:31] will retry after 758.249661ms: waiting for machine to come up
	I0130 20:59:16.401324   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:16.401849   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:16.401876   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:16.401804   51683 retry.go:31] will retry after 880.938241ms: waiting for machine to come up
	I0130 20:59:17.284035   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:17.284555   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:17.284589   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:17.284498   51683 retry.go:31] will retry after 1.044909044s: waiting for machine to come up
	I0130 20:59:18.330856   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:18.331411   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:18.331441   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:18.331357   51683 retry.go:31] will retry after 983.681906ms: waiting for machine to come up
	I0130 20:59:19.317708   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:19.318248   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:19.318284   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:19.318204   51683 retry.go:31] will retry after 1.60183402s: waiting for machine to come up
	I0130 20:59:19.982010   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:20.481059   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:20.981687   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:21.481433   50715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:21.639725   50715 kubeadm.go:1088] duration metric: took 13.36152988s to wait for elevateKubeSystemPrivileges.
	I0130 20:59:21.639757   50715 kubeadm.go:406] StartCluster complete in 27.56708838s
	I0130 20:59:21.639775   50715 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:21.639839   50715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:59:21.641333   50715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:21.641559   50715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:59:21.641647   50715 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:59:21.641762   50715 addons.go:69] Setting storage-provisioner=true in profile "auto-997045"
	I0130 20:59:21.641783   50715 addons.go:234] Setting addon storage-provisioner=true in "auto-997045"
	I0130 20:59:21.641844   50715 host.go:66] Checking if "auto-997045" exists ...
	I0130 20:59:21.641842   50715 addons.go:69] Setting default-storageclass=true in profile "auto-997045"
	I0130 20:59:21.641885   50715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-997045"
	I0130 20:59:21.642344   50715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:21.642364   50715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:21.642398   50715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:21.642405   50715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:21.642727   50715 config.go:182] Loaded profile config "auto-997045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:59:21.660327   50715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0130 20:59:21.660417   50715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43537
	I0130 20:59:21.660892   50715 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:21.660901   50715 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:21.661395   50715 main.go:141] libmachine: Using API Version  1
	I0130 20:59:21.661419   50715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:21.661398   50715 main.go:141] libmachine: Using API Version  1
	I0130 20:59:21.661483   50715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:21.661846   50715 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:21.661907   50715 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:21.662022   50715 main.go:141] libmachine: (auto-997045) Calling .GetState
	I0130 20:59:21.662534   50715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:21.662583   50715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:21.665262   50715 addons.go:234] Setting addon default-storageclass=true in "auto-997045"
	I0130 20:59:21.665303   50715 host.go:66] Checking if "auto-997045" exists ...
	I0130 20:59:21.665716   50715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:21.665751   50715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:21.683749   50715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0130 20:59:21.683878   50715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44485
	I0130 20:59:21.684272   50715 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:21.684363   50715 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:21.684847   50715 main.go:141] libmachine: Using API Version  1
	I0130 20:59:21.684864   50715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:21.684996   50715 main.go:141] libmachine: Using API Version  1
	I0130 20:59:21.685017   50715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:21.685224   50715 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:21.685448   50715 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:21.685619   50715 main.go:141] libmachine: (auto-997045) Calling .GetState
	I0130 20:59:21.685777   50715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:21.685796   50715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:21.687733   50715 main.go:141] libmachine: (auto-997045) Calling .DriverName
	I0130 20:59:21.690077   50715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:59:21.691596   50715 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:59:21.691617   50715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:59:21.691634   50715 main.go:141] libmachine: (auto-997045) Calling .GetSSHHostname
	I0130 20:59:21.695120   50715 main.go:141] libmachine: (auto-997045) DBG | domain auto-997045 has defined MAC address 52:54:00:a1:dd:b0 in network mk-auto-997045
	I0130 20:59:21.695590   50715 main.go:141] libmachine: (auto-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:dd:b0", ip: ""} in network mk-auto-997045: {Iface:virbr2 ExpiryTime:2024-01-30 21:58:38 +0000 UTC Type:0 Mac:52:54:00:a1:dd:b0 Iaid: IPaddr:192.168.50.14 Prefix:24 Hostname:auto-997045 Clientid:01:52:54:00:a1:dd:b0}
	I0130 20:59:21.695619   50715 main.go:141] libmachine: (auto-997045) DBG | domain auto-997045 has defined IP address 192.168.50.14 and MAC address 52:54:00:a1:dd:b0 in network mk-auto-997045
	I0130 20:59:21.695829   50715 main.go:141] libmachine: (auto-997045) Calling .GetSSHPort
	I0130 20:59:21.697431   50715 main.go:141] libmachine: (auto-997045) Calling .GetSSHKeyPath
	I0130 20:59:21.697616   50715 main.go:141] libmachine: (auto-997045) Calling .GetSSHUsername
	I0130 20:59:21.697746   50715 sshutil.go:53] new ssh client: &{IP:192.168.50.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/auto-997045/id_rsa Username:docker}
	I0130 20:59:21.705131   50715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41959
	I0130 20:59:21.705642   50715 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:21.706176   50715 main.go:141] libmachine: Using API Version  1
	I0130 20:59:21.706192   50715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:21.706666   50715 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:21.706860   50715 main.go:141] libmachine: (auto-997045) Calling .GetState
	I0130 20:59:21.708716   50715 main.go:141] libmachine: (auto-997045) Calling .DriverName
	I0130 20:59:21.708933   50715 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:59:21.708945   50715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:59:21.708960   50715 main.go:141] libmachine: (auto-997045) Calling .GetSSHHostname
	I0130 20:59:21.712076   50715 main.go:141] libmachine: (auto-997045) DBG | domain auto-997045 has defined MAC address 52:54:00:a1:dd:b0 in network mk-auto-997045
	I0130 20:59:21.712572   50715 main.go:141] libmachine: (auto-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:dd:b0", ip: ""} in network mk-auto-997045: {Iface:virbr2 ExpiryTime:2024-01-30 21:58:38 +0000 UTC Type:0 Mac:52:54:00:a1:dd:b0 Iaid: IPaddr:192.168.50.14 Prefix:24 Hostname:auto-997045 Clientid:01:52:54:00:a1:dd:b0}
	I0130 20:59:21.712590   50715 main.go:141] libmachine: (auto-997045) DBG | domain auto-997045 has defined IP address 192.168.50.14 and MAC address 52:54:00:a1:dd:b0 in network mk-auto-997045
	I0130 20:59:21.712751   50715 main.go:141] libmachine: (auto-997045) Calling .GetSSHPort
	I0130 20:59:21.712927   50715 main.go:141] libmachine: (auto-997045) Calling .GetSSHKeyPath
	I0130 20:59:21.713087   50715 main.go:141] libmachine: (auto-997045) Calling .GetSSHUsername
	I0130 20:59:21.713207   50715 sshutil.go:53] new ssh client: &{IP:192.168.50.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/auto-997045/id_rsa Username:docker}
	I0130 20:59:21.858278   50715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:59:21.872324   50715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:59:21.893931   50715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:59:22.168469   50715 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-997045" context rescaled to 1 replicas
	I0130 20:59:22.168511   50715 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.50.14 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:59:22.171987   50715 out.go:177] * Verifying Kubernetes components...
	I0130 20:59:19.607888   50967 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.307502456s)
	I0130 20:59:19.607922   50967 crio.go:451] Took 3.307612 seconds to extract the tarball
	I0130 20:59:19.607933   50967 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:59:19.651197   50967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:59:19.733614   50967 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:59:19.733639   50967 cache_images.go:84] Images are preloaded, skipping loading
	I0130 20:59:19.733707   50967 ssh_runner.go:195] Run: crio config
	I0130 20:59:19.797044   50967 cni.go:84] Creating CNI manager for "kindnet"
	I0130 20:59:19.797083   50967 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 20:59:19.797108   50967 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.163 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-997045 NodeName:kindnet-997045 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:59:19.797263   50967 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-997045"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:59:19.797367   50967 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kindnet-997045 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:kindnet-997045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0130 20:59:19.797429   50967 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 20:59:19.808314   50967 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:59:19.808387   50967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:59:19.817329   50967 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0130 20:59:19.834892   50967 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 20:59:19.851410   50967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0130 20:59:19.868907   50967 ssh_runner.go:195] Run: grep 192.168.61.163	control-plane.minikube.internal$ /etc/hosts
	I0130 20:59:19.872694   50967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:59:19.884461   50967 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045 for IP: 192.168.61.163
	I0130 20:59:19.884485   50967 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:19.884664   50967 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:59:19.884725   50967 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:59:19.884781   50967 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/client.key
	I0130 20:59:19.884796   50967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/client.crt with IP's: []
	I0130 20:59:19.975202   50967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/client.crt ...
	I0130 20:59:19.975234   50967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/client.crt: {Name:mk1510b7b021fc7d7bc8f81145062dcfe66d0674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:19.975408   50967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/client.key ...
	I0130 20:59:19.975435   50967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/client.key: {Name:mk5f62bbb32991de814eda3cd38150d71978f7b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:19.975533   50967 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.key.98c30f22
	I0130 20:59:19.975550   50967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.crt.98c30f22 with IP's: [192.168.61.163 10.96.0.1 127.0.0.1 10.0.0.1]
	I0130 20:59:20.156933   50967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.crt.98c30f22 ...
	I0130 20:59:20.156957   50967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.crt.98c30f22: {Name:mk3f7d016fb5037025d0d89e9fb054220d91fff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:20.157120   50967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.key.98c30f22 ...
	I0130 20:59:20.157139   50967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.key.98c30f22: {Name:mk927ffad38975bbe29b8a3eace678a9eb84245b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:20.157255   50967 certs.go:337] copying /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.crt.98c30f22 -> /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.crt
	I0130 20:59:20.157376   50967 certs.go:341] copying /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.key.98c30f22 -> /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.key
	I0130 20:59:20.157460   50967 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/proxy-client.key
	I0130 20:59:20.157475   50967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/proxy-client.crt with IP's: []
	I0130 20:59:20.318778   50967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/proxy-client.crt ...
	I0130 20:59:20.318809   50967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/proxy-client.crt: {Name:mk14699742d1a16f9afd60df83acf5ed18664c68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:20.318995   50967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/proxy-client.key ...
	I0130 20:59:20.319013   50967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/proxy-client.key: {Name:mk6e7d28bb59edb154e271360cff14cd378bc0bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:20.319243   50967 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:59:20.319311   50967 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:59:20.319327   50967 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:59:20.319366   50967 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:59:20.319403   50967 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:59:20.319443   50967 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:59:20.319515   50967 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:59:20.320339   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:59:20.345205   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:59:20.368797   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:59:20.393320   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/kindnet-997045/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:59:20.420454   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:59:20.444771   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:59:20.468787   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:59:20.492673   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:59:20.517660   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:59:20.543636   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:59:20.566437   50967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:59:20.590614   50967 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:59:20.607824   50967 ssh_runner.go:195] Run: openssl version
	I0130 20:59:20.613687   50967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:59:20.625922   50967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:59:20.630909   50967 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:59:20.630966   50967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:59:20.636860   50967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:59:20.648217   50967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:59:20.660456   50967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:59:20.665966   50967 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:59:20.666029   50967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:59:20.672036   50967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:59:20.685379   50967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:59:20.697606   50967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:59:20.702533   50967 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:59:20.702591   50967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:59:20.708423   50967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:59:20.720237   50967 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:59:20.724500   50967 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 20:59:20.724558   50967 kubeadm.go:404] StartCluster: {Name:kindnet-997045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-997045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:59:20.724638   50967 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:59:20.724698   50967 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:59:20.775788   50967 cri.go:89] found id: ""
	I0130 20:59:20.775861   50967 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:59:20.786417   50967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:59:20.796603   50967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:59:20.806059   50967 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:59:20.806100   50967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 20:59:21.015679   50967 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 20:59:22.173946   50715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:59:23.892567   50715 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.034209773s)
	I0130 20:59:23.892601   50715 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0130 20:59:24.244495   50715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.372127024s)
	I0130 20:59:24.244551   50715 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.070565941s)
	I0130 20:59:24.244563   50715 main.go:141] libmachine: Making call to close driver server
	I0130 20:59:24.244576   50715 main.go:141] libmachine: (auto-997045) Calling .Close
	I0130 20:59:24.244506   50715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.350539321s)
	I0130 20:59:24.244649   50715 main.go:141] libmachine: Making call to close driver server
	I0130 20:59:24.244666   50715 main.go:141] libmachine: (auto-997045) Calling .Close
	I0130 20:59:24.245036   50715 main.go:141] libmachine: (auto-997045) DBG | Closing plugin on server side
	I0130 20:59:24.245049   50715 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:59:24.245064   50715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:59:24.245074   50715 main.go:141] libmachine: Making call to close driver server
	I0130 20:59:24.245075   50715 main.go:141] libmachine: (auto-997045) DBG | Closing plugin on server side
	I0130 20:59:24.245082   50715 main.go:141] libmachine: (auto-997045) Calling .Close
	I0130 20:59:24.245105   50715 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:59:24.245114   50715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:59:24.245126   50715 main.go:141] libmachine: Making call to close driver server
	I0130 20:59:24.245135   50715 main.go:141] libmachine: (auto-997045) Calling .Close
	I0130 20:59:24.245314   50715 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:59:24.245331   50715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:59:24.245448   50715 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:59:24.245478   50715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:59:24.247662   50715 node_ready.go:35] waiting up to 15m0s for node "auto-997045" to be "Ready" ...
	I0130 20:59:24.256584   50715 node_ready.go:49] node "auto-997045" has status "Ready":"True"
	I0130 20:59:24.256619   50715 node_ready.go:38] duration metric: took 8.92142ms waiting for node "auto-997045" to be "Ready" ...
	I0130 20:59:24.256632   50715 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:59:24.264997   50715 main.go:141] libmachine: Making call to close driver server
	I0130 20:59:24.265040   50715 main.go:141] libmachine: (auto-997045) Calling .Close
	I0130 20:59:24.271512   50715 main.go:141] libmachine: (auto-997045) DBG | Closing plugin on server side
	I0130 20:59:24.271543   50715 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:59:24.271592   50715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:59:24.275352   50715 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0130 20:59:24.280519   50715 addons.go:505] enable addons completed in 2.638873784s: enabled=[storage-provisioner default-storageclass]
	I0130 20:59:24.280481   50715 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:20.921428   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:20.921950   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:20.921974   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:20.921898   51683 retry.go:31] will retry after 1.656276681s: waiting for machine to come up
	I0130 20:59:22.580569   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:22.581247   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:22.581271   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:22.581172   51683 retry.go:31] will retry after 1.766043806s: waiting for machine to come up
	I0130 20:59:24.349343   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:24.349871   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:24.349902   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:24.349819   51683 retry.go:31] will retry after 3.568707924s: waiting for machine to come up
	I0130 20:59:26.293170   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:28.789183   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:27.919932   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:27.920436   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:27.920468   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:27.920377   51683 retry.go:31] will retry after 2.831976822s: waiting for machine to come up
	I0130 20:59:34.381780   50967 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 20:59:34.381869   50967 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 20:59:34.381975   50967 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 20:59:34.382093   50967 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 20:59:34.382230   50967 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 20:59:34.382345   50967 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 20:59:34.383961   50967 out.go:204]   - Generating certificates and keys ...
	I0130 20:59:34.384055   50967 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 20:59:34.384155   50967 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 20:59:34.384245   50967 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0130 20:59:34.384353   50967 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0130 20:59:34.384451   50967 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0130 20:59:34.384539   50967 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0130 20:59:34.384636   50967 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0130 20:59:34.384795   50967 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-997045 localhost] and IPs [192.168.61.163 127.0.0.1 ::1]
	I0130 20:59:34.384868   50967 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0130 20:59:34.385038   50967 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-997045 localhost] and IPs [192.168.61.163 127.0.0.1 ::1]
	I0130 20:59:34.385130   50967 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0130 20:59:34.385225   50967 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0130 20:59:34.385279   50967 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0130 20:59:34.385340   50967 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 20:59:34.385413   50967 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 20:59:34.385480   50967 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 20:59:34.385572   50967 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 20:59:34.385646   50967 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 20:59:34.385761   50967 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 20:59:34.385858   50967 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 20:59:34.387342   50967 out.go:204]   - Booting up control plane ...
	I0130 20:59:34.387446   50967 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 20:59:34.387555   50967 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 20:59:34.387641   50967 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 20:59:34.387765   50967 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 20:59:34.387874   50967 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 20:59:34.387914   50967 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 20:59:34.388099   50967 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 20:59:34.388184   50967 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505992 seconds
	I0130 20:59:34.388265   50967 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 20:59:34.388361   50967 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 20:59:34.388406   50967 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 20:59:34.388546   50967 kubeadm.go:322] [mark-control-plane] Marking the node kindnet-997045 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 20:59:34.388592   50967 kubeadm.go:322] [bootstrap-token] Using token: 6ziwd9.zqur6zcdf0lox3kt
	I0130 20:59:34.389934   50967 out.go:204]   - Configuring RBAC rules ...
	I0130 20:59:34.390049   50967 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 20:59:34.390158   50967 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 20:59:34.390354   50967 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 20:59:34.390545   50967 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 20:59:34.390705   50967 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 20:59:34.390828   50967 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 20:59:34.390946   50967 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 20:59:34.391005   50967 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 20:59:34.391063   50967 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 20:59:34.391073   50967 kubeadm.go:322] 
	I0130 20:59:34.391146   50967 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 20:59:34.391156   50967 kubeadm.go:322] 
	I0130 20:59:34.391288   50967 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 20:59:34.391304   50967 kubeadm.go:322] 
	I0130 20:59:34.391343   50967 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 20:59:34.391425   50967 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 20:59:34.391514   50967 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 20:59:34.391531   50967 kubeadm.go:322] 
	I0130 20:59:34.391598   50967 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 20:59:34.391607   50967 kubeadm.go:322] 
	I0130 20:59:34.391678   50967 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 20:59:34.391693   50967 kubeadm.go:322] 
	I0130 20:59:34.391768   50967 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 20:59:34.391870   50967 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 20:59:34.391962   50967 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 20:59:34.391972   50967 kubeadm.go:322] 
	I0130 20:59:34.392074   50967 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 20:59:34.392194   50967 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 20:59:34.392212   50967 kubeadm.go:322] 
	I0130 20:59:34.392338   50967 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6ziwd9.zqur6zcdf0lox3kt \
	I0130 20:59:34.392481   50967 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 \
	I0130 20:59:34.392511   50967 kubeadm.go:322] 	--control-plane 
	I0130 20:59:34.392520   50967 kubeadm.go:322] 
	I0130 20:59:34.392621   50967 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 20:59:34.392639   50967 kubeadm.go:322] 
	I0130 20:59:34.392728   50967 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6ziwd9.zqur6zcdf0lox3kt \
	I0130 20:59:34.392853   50967 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff7470c9a5eccdb7aced97c9737b31696422b458d74780823ca6a7796da43ee3 
	I0130 20:59:34.392869   50967 cni.go:84] Creating CNI manager for "kindnet"
	I0130 20:59:34.394557   50967 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0130 20:59:31.298010   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:33.788045   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:30.753642   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:30.754198   51620 main.go:141] libmachine: (newest-cni-564644) DBG | unable to find current IP address of domain newest-cni-564644 in network mk-newest-cni-564644
	I0130 20:59:30.754231   51620 main.go:141] libmachine: (newest-cni-564644) DBG | I0130 20:59:30.754133   51683 retry.go:31] will retry after 4.765827231s: waiting for machine to come up
	I0130 20:59:34.395906   50967 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0130 20:59:34.403285   50967 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0130 20:59:34.403305   50967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0130 20:59:34.440786   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0130 20:59:35.480761   50967 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.039936877s)
	I0130 20:59:35.480810   50967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 20:59:35.480908   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:35.480923   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=kindnet-997045 minikube.k8s.io/updated_at=2024_01_30T20_59_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:35.504788   50967 ops.go:34] apiserver oom_adj: -16
	I0130 20:59:35.658377   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:36.159252   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:36.659418   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:37.159146   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:35.522018   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.522601   51620 main.go:141] libmachine: (newest-cni-564644) Found IP for machine: 192.168.39.184
	I0130 20:59:35.522638   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has current primary IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.522649   51620 main.go:141] libmachine: (newest-cni-564644) Reserving static IP address...
	I0130 20:59:35.523194   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "newest-cni-564644", mac: "52:54:00:b2:d5:bf", ip: "192.168.39.184"} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:35.523242   51620 main.go:141] libmachine: (newest-cni-564644) DBG | skip adding static IP to network mk-newest-cni-564644 - found existing host DHCP lease matching {name: "newest-cni-564644", mac: "52:54:00:b2:d5:bf", ip: "192.168.39.184"}
	I0130 20:59:35.523259   51620 main.go:141] libmachine: (newest-cni-564644) Reserved static IP address: 192.168.39.184
	I0130 20:59:35.523279   51620 main.go:141] libmachine: (newest-cni-564644) DBG | Getting to WaitForSSH function...
	I0130 20:59:35.523298   51620 main.go:141] libmachine: (newest-cni-564644) Waiting for SSH to be available...
	I0130 20:59:35.525902   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.526311   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:35.526345   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.526492   51620 main.go:141] libmachine: (newest-cni-564644) DBG | Using SSH client type: external
	I0130 20:59:35.526521   51620 main.go:141] libmachine: (newest-cni-564644) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/id_rsa (-rw-------)
	I0130 20:59:35.526574   51620 main.go:141] libmachine: (newest-cni-564644) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 20:59:35.526590   51620 main.go:141] libmachine: (newest-cni-564644) DBG | About to run SSH command:
	I0130 20:59:35.526602   51620 main.go:141] libmachine: (newest-cni-564644) DBG | exit 0
	I0130 20:59:35.659725   51620 main.go:141] libmachine: (newest-cni-564644) DBG | SSH cmd err, output: <nil>: 
	I0130 20:59:35.660100   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetConfigRaw
	I0130 20:59:35.660805   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetIP
	I0130 20:59:35.663638   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.664044   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:35.664084   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.664353   51620 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644/config.json ...
	I0130 20:59:35.664612   51620 machine.go:88] provisioning docker machine ...
	I0130 20:59:35.664636   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 20:59:35.664837   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetMachineName
	I0130 20:59:35.664983   51620 buildroot.go:166] provisioning hostname "newest-cni-564644"
	I0130 20:59:35.665002   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetMachineName
	I0130 20:59:35.665165   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 20:59:35.667986   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.668457   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:35.668497   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.668628   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 20:59:35.668812   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:35.668974   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:35.669120   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 20:59:35.669328   51620 main.go:141] libmachine: Using SSH client type: native
	I0130 20:59:35.669751   51620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0130 20:59:35.669768   51620 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-564644 && echo "newest-cni-564644" | sudo tee /etc/hostname
	I0130 20:59:35.805466   51620 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-564644
	
	I0130 20:59:35.805493   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 20:59:35.808528   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.808880   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:35.808911   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.809072   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 20:59:35.809298   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:35.809465   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:35.809626   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 20:59:35.809808   51620 main.go:141] libmachine: Using SSH client type: native
	I0130 20:59:35.810104   51620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0130 20:59:35.810123   51620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-564644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-564644/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-564644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 20:59:35.936751   51620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 20:59:35.936785   51620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4458/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4458/.minikube}
	I0130 20:59:35.936806   51620 buildroot.go:174] setting up certificates
	I0130 20:59:35.936816   51620 provision.go:83] configureAuth start
	I0130 20:59:35.936828   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetMachineName
	I0130 20:59:35.937132   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetIP
	I0130 20:59:35.939723   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.940080   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:35.940102   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.940209   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 20:59:35.942744   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.943106   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:35.943140   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:35.943313   51620 provision.go:138] copyHostCerts
	I0130 20:59:35.943383   51620 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem, removing ...
	I0130 20:59:35.943394   51620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem
	I0130 20:59:35.943472   51620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/key.pem (1679 bytes)
	I0130 20:59:35.943598   51620 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem, removing ...
	I0130 20:59:35.943609   51620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem
	I0130 20:59:35.943636   51620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/ca.pem (1082 bytes)
	I0130 20:59:35.943702   51620 exec_runner.go:144] found /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem, removing ...
	I0130 20:59:35.943708   51620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem
	I0130 20:59:35.943727   51620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4458/.minikube/cert.pem (1123 bytes)
	I0130 20:59:35.943784   51620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem org=jenkins.newest-cni-564644 san=[192.168.39.184 192.168.39.184 localhost 127.0.0.1 minikube newest-cni-564644]
	I0130 20:59:36.090940   51620 provision.go:172] copyRemoteCerts
	I0130 20:59:36.090998   51620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 20:59:36.091034   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 20:59:36.093916   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.094271   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:36.094314   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.094467   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 20:59:36.094671   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:36.094834   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 20:59:36.094988   51620 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/id_rsa Username:docker}
	I0130 20:59:36.180635   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0130 20:59:36.207234   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 20:59:36.231934   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 20:59:36.257852   51620 provision.go:86] duration metric: configureAuth took 321.02266ms
	I0130 20:59:36.257891   51620 buildroot.go:189] setting minikube options for container-runtime
	I0130 20:59:36.258125   51620 config.go:182] Loaded profile config "newest-cni-564644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 20:59:36.258224   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 20:59:36.261132   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.261631   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:36.261668   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.261934   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 20:59:36.262123   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:36.262301   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:36.262449   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 20:59:36.262586   51620 main.go:141] libmachine: Using SSH client type: native
	I0130 20:59:36.263041   51620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0130 20:59:36.263071   51620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 20:59:36.587171   51620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 20:59:36.587200   51620 machine.go:91] provisioned docker machine in 922.570487ms
	I0130 20:59:36.587212   51620 start.go:300] post-start starting for "newest-cni-564644" (driver="kvm2")
	I0130 20:59:36.587224   51620 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 20:59:36.587238   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 20:59:36.587563   51620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 20:59:36.587593   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 20:59:36.590614   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.590980   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:36.591020   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.591179   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 20:59:36.591400   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:36.591560   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 20:59:36.591777   51620 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/id_rsa Username:docker}
	I0130 20:59:36.685819   51620 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 20:59:36.690222   51620 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 20:59:36.690257   51620 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/addons for local assets ...
	I0130 20:59:36.690341   51620 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4458/.minikube/files for local assets ...
	I0130 20:59:36.690410   51620 filesync.go:149] local asset: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem -> 116672.pem in /etc/ssl/certs
	I0130 20:59:36.690486   51620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 20:59:36.700244   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:59:36.726609   51620 start.go:303] post-start completed in 139.383405ms
	I0130 20:59:36.726638   51620 fix.go:56] fixHost completed within 23.858451759s
	I0130 20:59:36.726663   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 20:59:36.729666   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.730014   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:36.730039   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.730250   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 20:59:36.730462   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:36.730634   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:36.730898   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 20:59:36.731097   51620 main.go:141] libmachine: Using SSH client type: native
	I0130 20:59:36.731514   51620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0130 20:59:36.731528   51620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 20:59:36.848401   51620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706648376.792980401
	
	I0130 20:59:36.848422   51620 fix.go:206] guest clock: 1706648376.792980401
	I0130 20:59:36.848431   51620 fix.go:219] Guest: 2024-01-30 20:59:36.792980401 +0000 UTC Remote: 2024-01-30 20:59:36.726642124 +0000 UTC m=+26.682402403 (delta=66.338277ms)
	I0130 20:59:36.848477   51620 fix.go:190] guest clock delta is within tolerance: 66.338277ms
	I0130 20:59:36.848487   51620 start.go:83] releasing machines lock for "newest-cni-564644", held for 23.980329191s
	I0130 20:59:36.848516   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 20:59:36.848800   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetIP
	I0130 20:59:36.851708   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.852120   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:36.852152   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.852447   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 20:59:36.853006   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 20:59:36.853219   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 20:59:36.853320   51620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 20:59:36.853372   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 20:59:36.853676   51620 ssh_runner.go:195] Run: cat /version.json
	I0130 20:59:36.853703   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 20:59:36.856643   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.857076   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:36.857125   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.857184   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.857285   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 20:59:36.857473   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:36.857563   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:36.857590   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:36.857644   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 20:59:36.857731   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 20:59:36.857823   51620 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/id_rsa Username:docker}
	I0130 20:59:36.857888   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 20:59:36.858039   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 20:59:36.858207   51620 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/id_rsa Username:docker}
	I0130 20:59:36.941000   51620 ssh_runner.go:195] Run: systemctl --version
	I0130 20:59:36.972836   51620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 20:59:37.115316   51620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 20:59:37.121841   51620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 20:59:37.121921   51620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 20:59:37.138287   51620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 20:59:37.138312   51620 start.go:475] detecting cgroup driver to use...
	I0130 20:59:37.138377   51620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 20:59:37.152890   51620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 20:59:37.168591   51620 docker.go:217] disabling cri-docker service (if available) ...
	I0130 20:59:37.168650   51620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 20:59:37.183749   51620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 20:59:37.196879   51620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 20:59:37.315345   51620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 20:59:37.440754   51620 docker.go:233] disabling docker service ...
	I0130 20:59:37.440819   51620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 20:59:37.456978   51620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 20:59:37.468502   51620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 20:59:37.575679   51620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 20:59:37.690131   51620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 20:59:37.706718   51620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 20:59:37.725818   51620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 20:59:37.725885   51620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:59:37.735811   51620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 20:59:37.735880   51620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:59:37.744983   51620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:59:37.753868   51620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 20:59:37.765437   51620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 20:59:37.775702   51620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 20:59:37.784739   51620 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 20:59:37.784813   51620 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 20:59:37.798797   51620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 20:59:37.809194   51620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 20:59:37.913480   51620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 20:59:38.095858   51620 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 20:59:38.095927   51620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 20:59:38.101047   51620 start.go:543] Will wait 60s for crictl version
	I0130 20:59:38.101102   51620 ssh_runner.go:195] Run: which crictl
	I0130 20:59:38.105304   51620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 20:59:38.144642   51620 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 20:59:38.144728   51620 ssh_runner.go:195] Run: crio --version
	I0130 20:59:38.193205   51620 ssh_runner.go:195] Run: crio --version
	I0130 20:59:38.254657   51620 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0130 20:59:38.256021   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetIP
	I0130 20:59:38.258760   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:38.259072   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 20:59:38.259102   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 20:59:38.259320   51620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 20:59:38.263641   51620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:59:38.277527   51620 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0130 20:59:35.788692   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:37.788965   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:38.278783   51620 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 20:59:38.278852   51620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:59:38.320208   51620 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0130 20:59:38.320270   51620 ssh_runner.go:195] Run: which lz4
	I0130 20:59:38.324458   51620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 20:59:38.329000   51620 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 20:59:38.329039   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0130 20:59:39.944307   51620 crio.go:444] Took 1.619910 seconds to copy over tarball
	I0130 20:59:39.944382   51620 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 20:59:37.658715   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:38.158601   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:38.658748   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:39.158536   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:39.658729   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:40.159154   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:40.658484   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:41.159208   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:41.658663   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:42.159040   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:39.792882   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:42.288426   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:42.845374   51620 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.900961325s)
	I0130 20:59:42.845404   51620 crio.go:451] Took 2.901071 seconds to extract the tarball
	I0130 20:59:42.845413   51620 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 20:59:42.883017   51620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 20:59:42.936750   51620 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 20:59:42.936777   51620 cache_images.go:84] Images are preloaded, skipping loading
	I0130 20:59:42.936859   51620 ssh_runner.go:195] Run: crio config
	I0130 20:59:42.999087   51620 cni.go:84] Creating CNI manager for ""
	I0130 20:59:42.999109   51620 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 20:59:42.999128   51620 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0130 20:59:42.999146   51620 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.184 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-564644 NodeName:newest-cni-564644 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 20:59:42.999294   51620 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-564644"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 20:59:42.999364   51620 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-564644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-564644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 20:59:42.999425   51620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0130 20:59:43.009579   51620 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 20:59:43.009657   51620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 20:59:43.019106   51620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0130 20:59:43.036479   51620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0130 20:59:43.053383   51620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I0130 20:59:43.070228   51620 ssh_runner.go:195] Run: grep 192.168.39.184	control-plane.minikube.internal$ /etc/hosts
	I0130 20:59:43.074442   51620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 20:59:43.087534   51620 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644 for IP: 192.168.39.184
	I0130 20:59:43.087571   51620 certs.go:190] acquiring lock for shared ca certs: {Name:mk24fe9183ec6d840f0a61478f0d9e80422d6be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:43.087719   51620 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key
	I0130 20:59:43.087761   51620 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key
	I0130 20:59:43.087819   51620 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644/client.key
	I0130 20:59:43.087872   51620 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644/apiserver.key.aff394ec
	I0130 20:59:43.087908   51620 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644/proxy-client.key
	I0130 20:59:43.088012   51620 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem (1338 bytes)
	W0130 20:59:43.088050   51620 certs.go:433] ignoring /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667_empty.pem, impossibly tiny 0 bytes
	I0130 20:59:43.088061   51620 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 20:59:43.088106   51620 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/ca.pem (1082 bytes)
	I0130 20:59:43.088142   51620 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/cert.pem (1123 bytes)
	I0130 20:59:43.088197   51620 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/certs/home/jenkins/minikube-integration/18007-4458/.minikube/certs/key.pem (1679 bytes)
	I0130 20:59:43.088266   51620 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem (1708 bytes)
	I0130 20:59:43.088882   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 20:59:43.112787   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 20:59:43.136223   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 20:59:43.159094   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/newest-cni-564644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 20:59:43.187163   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 20:59:43.216012   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0130 20:59:43.242787   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 20:59:43.267056   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0130 20:59:43.292764   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/certs/11667.pem --> /usr/share/ca-certificates/11667.pem (1338 bytes)
	I0130 20:59:43.316167   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/ssl/certs/116672.pem --> /usr/share/ca-certificates/116672.pem (1708 bytes)
	I0130 20:59:43.341131   51620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 20:59:43.364679   51620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 20:59:43.384551   51620 ssh_runner.go:195] Run: openssl version
	I0130 20:59:43.392793   51620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 20:59:43.404396   51620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:59:43.410090   51620 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:59:43.410162   51620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 20:59:43.417163   51620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 20:59:43.427788   51620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667.pem && ln -fs /usr/share/ca-certificates/11667.pem /etc/ssl/certs/11667.pem"
	I0130 20:59:43.438493   51620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667.pem
	I0130 20:59:43.443295   51620 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 19:35 /usr/share/ca-certificates/11667.pem
	I0130 20:59:43.443375   51620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667.pem
	I0130 20:59:43.449173   51620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11667.pem /etc/ssl/certs/51391683.0"
	I0130 20:59:43.459840   51620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116672.pem && ln -fs /usr/share/ca-certificates/116672.pem /etc/ssl/certs/116672.pem"
	I0130 20:59:43.470198   51620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116672.pem
	I0130 20:59:43.475777   51620 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 19:35 /usr/share/ca-certificates/116672.pem
	I0130 20:59:43.475840   51620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116672.pem
	I0130 20:59:43.481790   51620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116672.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 20:59:43.493597   51620 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 20:59:43.498803   51620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 20:59:43.505067   51620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 20:59:43.511565   51620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 20:59:43.518218   51620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 20:59:43.526100   51620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 20:59:43.532695   51620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 20:59:43.539158   51620 kubeadm.go:404] StartCluster: {Name:newest-cni-564644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-564644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 20:59:43.539255   51620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 20:59:43.539329   51620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:59:43.580275   51620 cri.go:89] found id: ""
	I0130 20:59:43.580369   51620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 20:59:43.589637   51620 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 20:59:43.589659   51620 kubeadm.go:636] restartCluster start
	I0130 20:59:43.589703   51620 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 20:59:43.598662   51620 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:43.599517   51620 kubeconfig.go:135] verify returned: extract IP: "newest-cni-564644" does not appear in /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:59:43.600030   51620 kubeconfig.go:146] "newest-cni-564644" context is missing from /home/jenkins/minikube-integration/18007-4458/kubeconfig - will repair!
	I0130 20:59:43.601184   51620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:43.675869   51620 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 20:59:43.685548   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:43.685611   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:43.701204   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:44.185719   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:44.185793   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:44.197551   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:44.686203   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:44.686323   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:44.698928   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:42.658999   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:43.158544   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:43.658959   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:44.159343   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:44.658881   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:45.159325   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:45.658499   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:46.159434   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:46.658454   50967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 20:59:46.763521   50967 kubeadm.go:1088] duration metric: took 11.282683793s to wait for elevateKubeSystemPrivileges.
	I0130 20:59:46.763548   50967 kubeadm.go:406] StartCluster complete in 26.03899804s
	I0130 20:59:46.763563   50967 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:46.763636   50967 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:59:46.765157   50967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 20:59:46.765384   50967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 20:59:46.765487   50967 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 20:59:46.765568   50967 addons.go:69] Setting storage-provisioner=true in profile "kindnet-997045"
	I0130 20:59:46.765580   50967 addons.go:69] Setting default-storageclass=true in profile "kindnet-997045"
	I0130 20:59:46.765623   50967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-997045"
	I0130 20:59:46.765627   50967 config.go:182] Loaded profile config "kindnet-997045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:59:46.765587   50967 addons.go:234] Setting addon storage-provisioner=true in "kindnet-997045"
	I0130 20:59:46.765774   50967 host.go:66] Checking if "kindnet-997045" exists ...
	I0130 20:59:46.766141   50967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:46.766163   50967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:46.766172   50967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:46.766193   50967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:46.781999   50967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0130 20:59:46.782405   50967 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:46.782962   50967 main.go:141] libmachine: Using API Version  1
	I0130 20:59:46.782988   50967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:46.783401   50967 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:46.785064   50967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0130 20:59:46.785636   50967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:46.785675   50967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:46.786334   50967 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:46.786843   50967 main.go:141] libmachine: Using API Version  1
	I0130 20:59:46.786866   50967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:46.787747   50967 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:46.788440   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetState
	I0130 20:59:46.791879   50967 addons.go:234] Setting addon default-storageclass=true in "kindnet-997045"
	I0130 20:59:46.791921   50967 host.go:66] Checking if "kindnet-997045" exists ...
	I0130 20:59:46.792354   50967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:46.792401   50967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:46.802070   50967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44071
	I0130 20:59:46.802498   50967 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:46.803144   50967 main.go:141] libmachine: Using API Version  1
	I0130 20:59:46.803168   50967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:46.803576   50967 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:46.803764   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetState
	I0130 20:59:46.805490   50967 main.go:141] libmachine: (kindnet-997045) Calling .DriverName
	I0130 20:59:46.807440   50967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 20:59:46.808791   50967 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:59:46.808810   50967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 20:59:46.808829   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:46.810146   50967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41125
	I0130 20:59:46.810586   50967 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:46.811153   50967 main.go:141] libmachine: Using API Version  1
	I0130 20:59:46.811183   50967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:46.811555   50967 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:46.812108   50967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 20:59:46.812137   50967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 20:59:46.812358   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:46.812481   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:46.812510   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:46.812745   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:46.812933   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:46.813094   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:46.813200   50967 sshutil.go:53] new ssh client: &{IP:192.168.61.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/kindnet-997045/id_rsa Username:docker}
	I0130 20:59:46.829916   50967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42437
	I0130 20:59:46.830422   50967 main.go:141] libmachine: () Calling .GetVersion
	I0130 20:59:46.830937   50967 main.go:141] libmachine: Using API Version  1
	I0130 20:59:46.830965   50967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 20:59:46.831383   50967 main.go:141] libmachine: () Calling .GetMachineName
	I0130 20:59:46.831598   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetState
	I0130 20:59:46.833248   50967 main.go:141] libmachine: (kindnet-997045) Calling .DriverName
	I0130 20:59:46.833515   50967 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 20:59:46.833530   50967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 20:59:46.833543   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHHostname
	I0130 20:59:46.837060   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:46.837097   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHPort
	I0130 20:59:46.837098   50967 main.go:141] libmachine: (kindnet-997045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:7e:1a", ip: ""} in network mk-kindnet-997045: {Iface:virbr3 ExpiryTime:2024-01-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:e3:7e:1a Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:kindnet-997045 Clientid:01:52:54:00:e3:7e:1a}
	I0130 20:59:46.837124   50967 main.go:141] libmachine: (kindnet-997045) DBG | domain kindnet-997045 has defined IP address 192.168.61.163 and MAC address 52:54:00:e3:7e:1a in network mk-kindnet-997045
	I0130 20:59:46.837261   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHKeyPath
	I0130 20:59:46.837370   50967 main.go:141] libmachine: (kindnet-997045) Calling .GetSSHUsername
	I0130 20:59:46.837504   50967 sshutil.go:53] new ssh client: &{IP:192.168.61.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/kindnet-997045/id_rsa Username:docker}
	I0130 20:59:46.962189   50967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 20:59:46.979836   50967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 20:59:46.985216   50967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 20:59:47.337069   50967 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-997045" context rescaled to 1 replicas
	I0130 20:59:47.337115   50967 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.61.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 20:59:47.338939   50967 out.go:177] * Verifying Kubernetes components...
	I0130 20:59:47.340455   50967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:59:47.869699   50967 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0130 20:59:48.129613   50967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.149732502s)
	I0130 20:59:48.129642   50967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.144403087s)
	I0130 20:59:48.129665   50967 main.go:141] libmachine: Making call to close driver server
	I0130 20:59:48.129683   50967 main.go:141] libmachine: (kindnet-997045) Calling .Close
	I0130 20:59:48.129665   50967 main.go:141] libmachine: Making call to close driver server
	I0130 20:59:48.129752   50967 main.go:141] libmachine: (kindnet-997045) Calling .Close
	I0130 20:59:48.130148   50967 main.go:141] libmachine: (kindnet-997045) DBG | Closing plugin on server side
	I0130 20:59:48.130173   50967 main.go:141] libmachine: (kindnet-997045) DBG | Closing plugin on server side
	I0130 20:59:48.130195   50967 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:59:48.130204   50967 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:59:48.130220   50967 main.go:141] libmachine: Making call to close driver server
	I0130 20:59:48.130229   50967 main.go:141] libmachine: (kindnet-997045) Calling .Close
	I0130 20:59:48.130564   50967 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:59:48.130603   50967 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:59:48.131301   50967 node_ready.go:35] waiting up to 15m0s for node "kindnet-997045" to be "Ready" ...
	I0130 20:59:48.131661   50967 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:59:48.131677   50967 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:59:48.131687   50967 main.go:141] libmachine: Making call to close driver server
	I0130 20:59:48.131696   50967 main.go:141] libmachine: (kindnet-997045) Calling .Close
	I0130 20:59:48.131957   50967 main.go:141] libmachine: (kindnet-997045) DBG | Closing plugin on server side
	I0130 20:59:48.131985   50967 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:59:48.132004   50967 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:59:48.143976   50967 main.go:141] libmachine: Making call to close driver server
	I0130 20:59:48.144000   50967 main.go:141] libmachine: (kindnet-997045) Calling .Close
	I0130 20:59:48.144237   50967 main.go:141] libmachine: Successfully made call to close driver server
	I0130 20:59:48.144253   50967 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 20:59:48.145810   50967 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0130 20:59:44.729443   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:46.790070   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:48.791577   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:45.186587   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:45.335223   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:45.347374   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:45.685752   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:45.686393   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:45.698547   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:46.185766   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:46.185849   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:46.198299   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:46.685860   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:46.685966   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:46.699830   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:47.186380   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:47.186481   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:47.198322   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:47.685839   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:47.685923   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:47.697795   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:48.185587   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:48.185689   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:48.197094   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:48.686571   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:48.686652   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:48.698215   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:49.186258   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:49.186345   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:49.197995   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:49.686520   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:49.686619   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:49.698969   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:48.147050   50967 addons.go:505] enable addons completed in 1.381571758s: enabled=[storage-provisioner default-storageclass]
	I0130 20:59:50.135915   50967 node_ready.go:58] node "kindnet-997045" has status "Ready":"False"
	I0130 20:59:52.136171   50967 node_ready.go:58] node "kindnet-997045" has status "Ready":"False"
	I0130 20:59:51.289873   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:53.790729   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:50.185833   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:50.185946   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:50.201210   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:50.685664   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:50.685742   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:50.697026   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:51.186619   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:51.186737   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:51.201476   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:51.686036   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:51.686115   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:51.697958   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:52.186175   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:52.186275   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:52.199192   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:52.685704   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:52.685807   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:52.700700   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:53.185630   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:53.185719   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:53.197174   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:53.685564   51620 api_server.go:166] Checking apiserver status ...
	I0130 20:59:53.685649   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 20:59:53.697423   51620 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 20:59:53.697454   51620 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 20:59:53.697465   51620 kubeadm.go:1135] stopping kube-system containers ...
	I0130 20:59:53.697478   51620 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 20:59:53.697540   51620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 20:59:53.747539   51620 cri.go:89] found id: ""
	I0130 20:59:53.747609   51620 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 20:59:53.772720   51620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 20:59:53.782449   51620 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 20:59:53.782522   51620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 20:59:53.793734   51620 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 20:59:53.793771   51620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:59:53.948355   51620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:59:54.963627   51620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015231287s)
	I0130 20:59:54.963661   51620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:59:53.135705   50967 node_ready.go:49] node "kindnet-997045" has status "Ready":"True"
	I0130 20:59:53.135728   50967 node_ready.go:38] duration metric: took 5.004393807s waiting for node "kindnet-997045" to be "Ready" ...
	I0130 20:59:53.135736   50967 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:59:53.149171   50967 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-4n7q6" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.157795   50967 pod_ready.go:92] pod "coredns-5dd5756b68-4n7q6" in "kube-system" namespace has status "Ready":"True"
	I0130 20:59:55.157816   50967 pod_ready.go:81] duration metric: took 2.00862098s waiting for pod "coredns-5dd5756b68-4n7q6" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.157827   50967 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-997045" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.166672   50967 pod_ready.go:92] pod "etcd-kindnet-997045" in "kube-system" namespace has status "Ready":"True"
	I0130 20:59:55.166706   50967 pod_ready.go:81] duration metric: took 8.871671ms waiting for pod "etcd-kindnet-997045" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.166723   50967 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-997045" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.173243   50967 pod_ready.go:92] pod "kube-apiserver-kindnet-997045" in "kube-system" namespace has status "Ready":"True"
	I0130 20:59:55.173274   50967 pod_ready.go:81] duration metric: took 6.541065ms waiting for pod "kube-apiserver-kindnet-997045" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.173287   50967 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-997045" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.179547   50967 pod_ready.go:92] pod "kube-controller-manager-kindnet-997045" in "kube-system" namespace has status "Ready":"True"
	I0130 20:59:55.179579   50967 pod_ready.go:81] duration metric: took 6.282729ms waiting for pod "kube-controller-manager-kindnet-997045" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.179592   50967 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-g2mkf" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.185550   50967 pod_ready.go:92] pod "kube-proxy-g2mkf" in "kube-system" namespace has status "Ready":"True"
	I0130 20:59:55.185570   50967 pod_ready.go:81] duration metric: took 5.970807ms waiting for pod "kube-proxy-g2mkf" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.185579   50967 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-997045" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.554331   50967 pod_ready.go:92] pod "kube-scheduler-kindnet-997045" in "kube-system" namespace has status "Ready":"True"
	I0130 20:59:55.554359   50967 pod_ready.go:81] duration metric: took 368.773529ms waiting for pod "kube-scheduler-kindnet-997045" in "kube-system" namespace to be "Ready" ...
	I0130 20:59:55.554374   50967 pod_ready.go:38] duration metric: took 2.418627615s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 20:59:55.554394   50967 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:59:55.554474   50967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:59:55.572814   50967 api_server.go:72] duration metric: took 8.235661898s to wait for apiserver process to appear ...
	I0130 20:59:55.572836   50967 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:59:55.572854   50967 api_server.go:253] Checking apiserver healthz at https://192.168.61.163:8443/healthz ...
	I0130 20:59:55.578184   50967 api_server.go:279] https://192.168.61.163:8443/healthz returned 200:
	ok
	I0130 20:59:55.579852   50967 api_server.go:141] control plane version: v1.28.4
	I0130 20:59:55.579871   50967 api_server.go:131] duration metric: took 7.028835ms to wait for apiserver health ...
	I0130 20:59:55.579877   50967 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 20:59:55.758318   50967 system_pods.go:59] 8 kube-system pods found
	I0130 20:59:55.758345   50967 system_pods.go:61] "coredns-5dd5756b68-4n7q6" [9b6b7d30-ad8d-476c-b639-8a696d171308] Running
	I0130 20:59:55.758350   50967 system_pods.go:61] "etcd-kindnet-997045" [81295f59-d2fd-479f-8039-389960eb1553] Running
	I0130 20:59:55.758354   50967 system_pods.go:61] "kindnet-lf2vl" [af5c4a07-fc0f-41ad-a4ce-849faf43fb6e] Running
	I0130 20:59:55.758359   50967 system_pods.go:61] "kube-apiserver-kindnet-997045" [233f1694-5d22-48ad-bb1c-a76b3da8a904] Running
	I0130 20:59:55.758364   50967 system_pods.go:61] "kube-controller-manager-kindnet-997045" [c036f4fd-3b6e-45b5-94f1-c63c589c6778] Running
	I0130 20:59:55.758369   50967 system_pods.go:61] "kube-proxy-g2mkf" [4ce600ee-fd06-4c38-b5db-c5a543930d62] Running
	I0130 20:59:55.758373   50967 system_pods.go:61] "kube-scheduler-kindnet-997045" [9164ee8a-b1de-47eb-8113-a95f4e6ce511] Running
	I0130 20:59:55.758376   50967 system_pods.go:61] "storage-provisioner" [72eabce9-418f-4324-9dcc-0d8cb6ff5d92] Running
	I0130 20:59:55.758382   50967 system_pods.go:74] duration metric: took 178.499557ms to wait for pod list to return data ...
	I0130 20:59:55.758389   50967 default_sa.go:34] waiting for default service account to be created ...
	I0130 20:59:55.954848   50967 default_sa.go:45] found service account: "default"
	I0130 20:59:55.954880   50967 default_sa.go:55] duration metric: took 196.485254ms for default service account to be created ...
	I0130 20:59:55.954889   50967 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 20:59:56.158958   50967 system_pods.go:86] 8 kube-system pods found
	I0130 20:59:56.158985   50967 system_pods.go:89] "coredns-5dd5756b68-4n7q6" [9b6b7d30-ad8d-476c-b639-8a696d171308] Running
	I0130 20:59:56.158991   50967 system_pods.go:89] "etcd-kindnet-997045" [81295f59-d2fd-479f-8039-389960eb1553] Running
	I0130 20:59:56.158995   50967 system_pods.go:89] "kindnet-lf2vl" [af5c4a07-fc0f-41ad-a4ce-849faf43fb6e] Running
	I0130 20:59:56.158999   50967 system_pods.go:89] "kube-apiserver-kindnet-997045" [233f1694-5d22-48ad-bb1c-a76b3da8a904] Running
	I0130 20:59:56.159003   50967 system_pods.go:89] "kube-controller-manager-kindnet-997045" [c036f4fd-3b6e-45b5-94f1-c63c589c6778] Running
	I0130 20:59:56.159008   50967 system_pods.go:89] "kube-proxy-g2mkf" [4ce600ee-fd06-4c38-b5db-c5a543930d62] Running
	I0130 20:59:56.159012   50967 system_pods.go:89] "kube-scheduler-kindnet-997045" [9164ee8a-b1de-47eb-8113-a95f4e6ce511] Running
	I0130 20:59:56.159016   50967 system_pods.go:89] "storage-provisioner" [72eabce9-418f-4324-9dcc-0d8cb6ff5d92] Running
	I0130 20:59:56.159022   50967 system_pods.go:126] duration metric: took 204.128295ms to wait for k8s-apps to be running ...
	I0130 20:59:56.159028   50967 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 20:59:56.159071   50967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 20:59:56.179051   50967 system_svc.go:56] duration metric: took 20.013812ms WaitForService to wait for kubelet.
	I0130 20:59:56.179080   50967 kubeadm.go:581] duration metric: took 8.841932569s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 20:59:56.179105   50967 node_conditions.go:102] verifying NodePressure condition ...
	I0130 20:59:56.355261   50967 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 20:59:56.355313   50967 node_conditions.go:123] node cpu capacity is 2
	I0130 20:59:56.355326   50967 node_conditions.go:105] duration metric: took 176.216332ms to run NodePressure ...
	I0130 20:59:56.355337   50967 start.go:228] waiting for startup goroutines ...
	I0130 20:59:56.355342   50967 start.go:233] waiting for cluster config update ...
	I0130 20:59:56.355353   50967 start.go:242] writing updated cluster config ...
	I0130 20:59:56.355644   50967 ssh_runner.go:195] Run: rm -f paused
	I0130 20:59:56.420369   50967 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 20:59:56.422379   50967 out.go:177] * Done! kubectl is now configured to use "kindnet-997045" cluster and "default" namespace by default
	I0130 20:59:56.289388   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:58.295449   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 20:59:55.181859   51620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:59:55.267004   51620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 20:59:55.340312   51620 api_server.go:52] waiting for apiserver process to appear ...
	I0130 20:59:55.340391   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:59:55.841418   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:59:56.340664   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:59:56.840618   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:59:57.340556   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:59:57.840502   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:59:58.341412   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 20:59:58.370129   51620 api_server.go:72] duration metric: took 3.029817986s to wait for apiserver process to appear ...
	I0130 20:59:58.370157   51620 api_server.go:88] waiting for apiserver healthz status ...
	I0130 20:59:58.370184   51620 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0130 21:00:01.962014   51620 api_server.go:279] https://192.168.39.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 21:00:01.962043   51620 api_server.go:103] status: https://192.168.39.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 21:00:01.962058   51620 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0130 21:00:02.048044   51620 api_server.go:279] https://192.168.39.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 21:00:02.048073   51620 api_server.go:103] status: https://192.168.39.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 21:00:02.370483   51620 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0130 21:00:02.376190   51620 api_server.go:279] https://192.168.39.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 21:00:02.376232   51620 api_server.go:103] status: https://192.168.39.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 21:00:02.870474   51620 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0130 21:00:02.876419   51620 api_server.go:279] https://192.168.39.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 21:00:02.876453   51620 api_server.go:103] status: https://192.168.39.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 21:00:03.370299   51620 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0130 21:00:03.375839   51620 api_server.go:279] https://192.168.39.184:8443/healthz returned 200:
	ok
	I0130 21:00:03.389626   51620 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 21:00:03.389656   51620 api_server.go:131] duration metric: took 5.019490112s to wait for apiserver health ...
	I0130 21:00:03.389667   51620 cni.go:84] Creating CNI manager for ""
	I0130 21:00:03.389676   51620 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 21:00:03.391299   51620 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 21:00:00.788577   50715 pod_ready.go:102] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"False"
	I0130 21:00:02.294909   50715 pod_ready.go:92] pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace has status "Ready":"True"
	I0130 21:00:02.294938   50715 pod_ready.go:81] duration metric: took 38.014386126s waiting for pod "coredns-5dd5756b68-cz9q4" in "kube-system" namespace to be "Ready" ...
	I0130 21:00:02.294951   50715 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-997045" in "kube-system" namespace to be "Ready" ...
	I0130 21:00:02.304941   50715 pod_ready.go:92] pod "etcd-auto-997045" in "kube-system" namespace has status "Ready":"True"
	I0130 21:00:02.304963   50715 pod_ready.go:81] duration metric: took 10.004623ms waiting for pod "etcd-auto-997045" in "kube-system" namespace to be "Ready" ...
	I0130 21:00:02.304976   50715 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-997045" in "kube-system" namespace to be "Ready" ...
	I0130 21:00:02.315419   50715 pod_ready.go:92] pod "kube-apiserver-auto-997045" in "kube-system" namespace has status "Ready":"True"
	I0130 21:00:02.315442   50715 pod_ready.go:81] duration metric: took 10.458266ms waiting for pod "kube-apiserver-auto-997045" in "kube-system" namespace to be "Ready" ...
	I0130 21:00:02.315454   50715 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-997045" in "kube-system" namespace to be "Ready" ...
	I0130 21:00:02.322869   50715 pod_ready.go:92] pod "kube-controller-manager-auto-997045" in "kube-system" namespace has status "Ready":"True"
	I0130 21:00:02.322890   50715 pod_ready.go:81] duration metric: took 7.427466ms waiting for pod "kube-controller-manager-auto-997045" in "kube-system" namespace to be "Ready" ...
	I0130 21:00:02.322901   50715 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-w6dgf" in "kube-system" namespace to be "Ready" ...
	I0130 21:00:02.329382   50715 pod_ready.go:92] pod "kube-proxy-w6dgf" in "kube-system" namespace has status "Ready":"True"
	I0130 21:00:02.329410   50715 pod_ready.go:81] duration metric: took 6.50148ms waiting for pod "kube-proxy-w6dgf" in "kube-system" namespace to be "Ready" ...
	I0130 21:00:02.329423   50715 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-997045" in "kube-system" namespace to be "Ready" ...
	I0130 21:00:02.686081   50715 pod_ready.go:92] pod "kube-scheduler-auto-997045" in "kube-system" namespace has status "Ready":"True"
	I0130 21:00:02.686105   50715 pod_ready.go:81] duration metric: took 356.675008ms waiting for pod "kube-scheduler-auto-997045" in "kube-system" namespace to be "Ready" ...
	I0130 21:00:02.686118   50715 pod_ready.go:38] duration metric: took 38.429474426s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 21:00:02.686131   50715 api_server.go:52] waiting for apiserver process to appear ...
	I0130 21:00:02.686174   50715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:00:02.711817   50715 api_server.go:72] duration metric: took 40.543273519s to wait for apiserver process to appear ...
	I0130 21:00:02.711842   50715 api_server.go:88] waiting for apiserver healthz status ...
	I0130 21:00:02.711862   50715 api_server.go:253] Checking apiserver healthz at https://192.168.50.14:8443/healthz ...
	I0130 21:00:02.719570   50715 api_server.go:279] https://192.168.50.14:8443/healthz returned 200:
	ok
	I0130 21:00:02.721355   50715 api_server.go:141] control plane version: v1.28.4
	I0130 21:00:02.721376   50715 api_server.go:131] duration metric: took 9.527873ms to wait for apiserver health ...
	I0130 21:00:02.721383   50715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 21:00:02.891213   50715 system_pods.go:59] 7 kube-system pods found
	I0130 21:00:02.891298   50715 system_pods.go:61] "coredns-5dd5756b68-cz9q4" [89b5b042-1748-4057-bde8-a244013ce380] Running
	I0130 21:00:02.891309   50715 system_pods.go:61] "etcd-auto-997045" [0c0a08f7-ff8d-44f0-b42e-7275f65959bb] Running
	I0130 21:00:02.891316   50715 system_pods.go:61] "kube-apiserver-auto-997045" [b5d3e397-933f-4a5b-94f9-c9d0293974a6] Running
	I0130 21:00:02.891325   50715 system_pods.go:61] "kube-controller-manager-auto-997045" [13f28956-ab26-4175-a75e-89c8c39b656e] Running
	I0130 21:00:02.891332   50715 system_pods.go:61] "kube-proxy-w6dgf" [381e040e-5aba-4349-b51e-5030de7fc50d] Running
	I0130 21:00:02.891338   50715 system_pods.go:61] "kube-scheduler-auto-997045" [14440127-0c3e-4f7a-9fb7-8b7c2b9d5bb2] Running
	I0130 21:00:02.891343   50715 system_pods.go:61] "storage-provisioner" [8de5220e-791f-4b98-a684-2642a0ff369f] Running
	I0130 21:00:02.891351   50715 system_pods.go:74] duration metric: took 169.96169ms to wait for pod list to return data ...
	I0130 21:00:02.891360   50715 default_sa.go:34] waiting for default service account to be created ...
	I0130 21:00:03.085753   50715 default_sa.go:45] found service account: "default"
	I0130 21:00:03.085778   50715 default_sa.go:55] duration metric: took 194.411052ms for default service account to be created ...
	I0130 21:00:03.085787   50715 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 21:00:03.289068   50715 system_pods.go:86] 7 kube-system pods found
	I0130 21:00:03.289101   50715 system_pods.go:89] "coredns-5dd5756b68-cz9q4" [89b5b042-1748-4057-bde8-a244013ce380] Running
	I0130 21:00:03.289109   50715 system_pods.go:89] "etcd-auto-997045" [0c0a08f7-ff8d-44f0-b42e-7275f65959bb] Running
	I0130 21:00:03.289115   50715 system_pods.go:89] "kube-apiserver-auto-997045" [b5d3e397-933f-4a5b-94f9-c9d0293974a6] Running
	I0130 21:00:03.289120   50715 system_pods.go:89] "kube-controller-manager-auto-997045" [13f28956-ab26-4175-a75e-89c8c39b656e] Running
	I0130 21:00:03.289124   50715 system_pods.go:89] "kube-proxy-w6dgf" [381e040e-5aba-4349-b51e-5030de7fc50d] Running
	I0130 21:00:03.289128   50715 system_pods.go:89] "kube-scheduler-auto-997045" [14440127-0c3e-4f7a-9fb7-8b7c2b9d5bb2] Running
	I0130 21:00:03.289132   50715 system_pods.go:89] "storage-provisioner" [8de5220e-791f-4b98-a684-2642a0ff369f] Running
	I0130 21:00:03.289139   50715 system_pods.go:126] duration metric: took 203.346694ms to wait for k8s-apps to be running ...
	I0130 21:00:03.289147   50715 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 21:00:03.289196   50715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:00:03.312379   50715 system_svc.go:56] duration metric: took 23.220766ms WaitForService to wait for kubelet.
	I0130 21:00:03.312406   50715 kubeadm.go:581] duration metric: took 41.143869897s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 21:00:03.312428   50715 node_conditions.go:102] verifying NodePressure condition ...
	I0130 21:00:03.485321   50715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:00:03.485352   50715 node_conditions.go:123] node cpu capacity is 2
	I0130 21:00:03.485364   50715 node_conditions.go:105] duration metric: took 172.931695ms to run NodePressure ...
	I0130 21:00:03.485378   50715 start.go:228] waiting for startup goroutines ...
	I0130 21:00:03.485386   50715 start.go:233] waiting for cluster config update ...
	I0130 21:00:03.485396   50715 start.go:242] writing updated cluster config ...
	I0130 21:00:03.485617   50715 ssh_runner.go:195] Run: rm -f paused
	I0130 21:00:03.543072   50715 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 21:00:03.544963   50715 out.go:177] * Done! kubectl is now configured to use "auto-997045" cluster and "default" namespace by default
	I0130 21:00:03.392529   51620 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 21:00:03.420810   51620 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 21:00:03.447382   51620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 21:00:03.460587   51620 system_pods.go:59] 8 kube-system pods found
	I0130 21:00:03.460623   51620 system_pods.go:61] "coredns-76f75df574-jpvcn" [eda4c8fc-de07-44cf-bc77-435c3e0a2acb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 21:00:03.460634   51620 system_pods.go:61] "etcd-newest-cni-564644" [02e2369a-6144-47a2-9ceb-b1a026b3eb51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 21:00:03.460646   51620 system_pods.go:61] "kube-apiserver-newest-cni-564644" [475d6a31-edf7-4aab-ac5a-eec35b1a9495] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 21:00:03.460655   51620 system_pods.go:61] "kube-controller-manager-newest-cni-564644" [33ee3498-572f-4020-8941-2856b6106c2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 21:00:03.460670   51620 system_pods.go:61] "kube-proxy-qm4lr" [e1b54f33-0c5f-4d63-83f2-a09b2a015fa2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 21:00:03.460686   51620 system_pods.go:61] "kube-scheduler-newest-cni-564644" [bac0d4db-ca11-43e2-89cb-f91f18dc36cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 21:00:03.460695   51620 system_pods.go:61] "metrics-server-57f55c9bc5-gsg5c" [7653027d-478e-4ade-972e-d20effe7fe08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 21:00:03.460706   51620 system_pods.go:61] "storage-provisioner" [6db9cb6b-5319-4a7c-a995-cc7fd865ce8f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 21:00:03.460722   51620 system_pods.go:74] duration metric: took 13.322538ms to wait for pod list to return data ...
	I0130 21:00:03.460732   51620 node_conditions.go:102] verifying NodePressure condition ...
	I0130 21:00:03.465371   51620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:00:03.465402   51620 node_conditions.go:123] node cpu capacity is 2
	I0130 21:00:03.465415   51620 node_conditions.go:105] duration metric: took 4.676648ms to run NodePressure ...
	I0130 21:00:03.465440   51620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 21:00:03.903060   51620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 21:00:03.931575   51620 ops.go:34] apiserver oom_adj: -16
	I0130 21:00:03.931642   51620 kubeadm.go:640] restartCluster took 20.341975782s
	I0130 21:00:03.931654   51620 kubeadm.go:406] StartCluster complete in 20.39251243s
	I0130 21:00:03.931674   51620 settings.go:142] acquiring lock: {Name:mkb1419b2f0c2fce4a4d2a19a29b0c0842cda583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:00:03.931739   51620 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 21:00:03.933778   51620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/kubeconfig: {Name:mk2f189e5bfe50a64039d4cc6051d185b70ff3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:00:03.934325   51620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 21:00:03.934469   51620 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 21:00:03.934543   51620 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-564644"
	I0130 21:00:03.934562   51620 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-564644"
	W0130 21:00:03.934573   51620 addons.go:243] addon storage-provisioner should already be in state true
	I0130 21:00:03.934600   51620 config.go:182] Loaded profile config "newest-cni-564644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 21:00:03.934621   51620 host.go:66] Checking if "newest-cni-564644" exists ...
	I0130 21:00:03.934665   51620 addons.go:69] Setting default-storageclass=true in profile "newest-cni-564644"
	I0130 21:00:03.934689   51620 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-564644"
	I0130 21:00:03.935040   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:00:03.935072   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:00:03.935087   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:00:03.935121   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:00:03.935293   51620 addons.go:69] Setting metrics-server=true in profile "newest-cni-564644"
	I0130 21:00:03.935310   51620 addons.go:234] Setting addon metrics-server=true in "newest-cni-564644"
	W0130 21:00:03.935317   51620 addons.go:243] addon metrics-server should already be in state true
	I0130 21:00:03.935366   51620 host.go:66] Checking if "newest-cni-564644" exists ...
	I0130 21:00:03.935551   51620 addons.go:69] Setting dashboard=true in profile "newest-cni-564644"
	I0130 21:00:03.935569   51620 addons.go:234] Setting addon dashboard=true in "newest-cni-564644"
	W0130 21:00:03.935577   51620 addons.go:243] addon dashboard should already be in state true
	I0130 21:00:03.935620   51620 host.go:66] Checking if "newest-cni-564644" exists ...
	I0130 21:00:03.935699   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:00:03.935716   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:00:03.935991   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:00:03.936011   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:00:03.947649   51620 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-564644" context rescaled to 1 replicas
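	The kapi.go line above records the coredns Deployment in kube-system being rescaled to a single replica for this profile. One way to perform the same rescale with client-go is sketched below; the kubeconfig path is taken from the log earlier in this run, and the GetScale/UpdateScale sequence is a generic approach, not necessarily the exact code path minikube uses:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as written by this test run (see the settings.go line above).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18007-4458/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx := context.TODO()
		// Read the current scale of the coredns Deployment, then set it to 1 replica.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}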
	I0130 21:00:03.947691   51620 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 21:00:03.949124   51620 out.go:177] * Verifying Kubernetes components...
	I0130 21:00:03.950317   51620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:00:03.956319   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I0130 21:00:03.956960   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:00:03.957611   51620 main.go:141] libmachine: Using API Version  1
	I0130 21:00:03.957630   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:00:03.958044   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:00:03.958247   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetState
	I0130 21:00:03.961699   51620 addons.go:234] Setting addon default-storageclass=true in "newest-cni-564644"
	W0130 21:00:03.961721   51620 addons.go:243] addon default-storageclass should already be in state true
	I0130 21:00:03.961749   51620 host.go:66] Checking if "newest-cni-564644" exists ...
	I0130 21:00:03.962200   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:00:03.962233   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:00:03.962440   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0130 21:00:03.962875   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:00:03.963654   51620 main.go:141] libmachine: Using API Version  1
	I0130 21:00:03.963702   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:00:03.964482   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:00:03.965275   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
	I0130 21:00:03.966021   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:00:03.966059   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:00:03.966285   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:00:03.967476   51620 main.go:141] libmachine: Using API Version  1
	I0130 21:00:03.967494   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:00:03.967922   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:00:03.968478   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:00:03.968516   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:00:03.969488   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33129
	I0130 21:00:03.970263   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:00:03.970880   51620 main.go:141] libmachine: Using API Version  1
	I0130 21:00:03.970925   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:00:03.971350   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:00:03.971890   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:00:03.971924   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:00:03.985732   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I0130 21:00:03.986258   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:00:03.986788   51620 main.go:141] libmachine: Using API Version  1
	I0130 21:00:03.986820   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:00:03.987200   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:00:03.987394   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetState
	I0130 21:00:03.989318   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 21:00:03.997363   51620 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 21:00:03.998710   51620 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 21:00:03.998723   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 21:00:03.998742   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 21:00:03.999425   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33099
	I0130 21:00:04.006021   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I0130 21:00:04.006589   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:00:04.006701   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 21:00:04.007205   51620 main.go:141] libmachine: Using API Version  1
	I0130 21:00:04.007226   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:00:04.007317   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 21:00:04.007339   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 21:00:04.007382   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 21:00:04.007558   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 21:00:04.007635   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:00:04.007710   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 21:00:04.008212   51620 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/id_rsa Username:docker}
	I0130 21:00:04.008735   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:00:04.008937   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetState
	I0130 21:00:04.015401   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 21:00:04.015638   51620 main.go:141] libmachine: Using API Version  1
	I0130 21:00:04.015662   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:00:04.016183   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:00:04.016377   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetState
	I0130 21:00:04.017456   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0130 21:00:04.017851   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:00:04.018616   51620 main.go:141] libmachine: Using API Version  1
	I0130 21:00:04.018639   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:00:04.018850   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 21:00:04.023404   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:00:04.024980   51620 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0130 21:00:04.026449   51620 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0130 21:00:04.027723   51620 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0130 21:00:04.027740   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0130 21:00:04.027769   51620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:00:04.027794   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 21:00:04.027809   51620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:00:04.026390   51620 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 21:00:04.029357   51620 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 21:00:04.029375   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 21:00:04.029392   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 21:00:04.032972   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 21:00:04.033943   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 21:00:04.034112   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 21:00:04.034132   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 21:00:04.034509   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 21:00:04.034784   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 21:00:04.034798   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 21:00:04.034658   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 21:00:04.034896   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 21:00:04.034964   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 21:00:04.035119   51620 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/id_rsa Username:docker}
	I0130 21:00:04.035461   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 21:00:04.035602   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 21:00:04.039449   51620 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/id_rsa Username:docker}
	I0130 21:00:04.050773   51620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
	I0130 21:00:04.051352   51620 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:00:04.051912   51620 main.go:141] libmachine: Using API Version  1
	I0130 21:00:04.051932   51620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:00:04.055659   51620 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:00:04.055933   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetState
	I0130 21:00:04.057908   51620 main.go:141] libmachine: (newest-cni-564644) Calling .DriverName
	I0130 21:00:04.058164   51620 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 21:00:04.058176   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 21:00:04.058189   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHHostname
	I0130 21:00:04.061439   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 21:00:04.061873   51620 main.go:141] libmachine: (newest-cni-564644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:d5:bf", ip: ""} in network mk-newest-cni-564644: {Iface:virbr1 ExpiryTime:2024-01-30 21:59:26 +0000 UTC Type:0 Mac:52:54:00:b2:d5:bf Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:newest-cni-564644 Clientid:01:52:54:00:b2:d5:bf}
	I0130 21:00:04.061904   51620 main.go:141] libmachine: (newest-cni-564644) DBG | domain newest-cni-564644 has defined IP address 192.168.39.184 and MAC address 52:54:00:b2:d5:bf in network mk-newest-cni-564644
	I0130 21:00:04.062140   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHPort
	I0130 21:00:04.062315   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHKeyPath
	I0130 21:00:04.062469   51620 main.go:141] libmachine: (newest-cni-564644) Calling .GetSSHUsername
	I0130 21:00:04.062599   51620 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/newest-cni-564644/id_rsa Username:docker}
	I0130 21:00:04.171909   51620 api_server.go:52] waiting for apiserver process to appear ...
	I0130 21:00:04.171998   51620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:00:04.172280   51620 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 21:00:04.196907   51620 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 21:00:04.196955   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 21:00:04.210176   51620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 21:00:04.216620   51620 api_server.go:72] duration metric: took 268.899854ms to wait for apiserver process to appear ...
	I0130 21:00:04.216643   51620 api_server.go:88] waiting for apiserver healthz status ...
	I0130 21:00:04.216664   51620 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0130 21:00:04.254260   51620 api_server.go:279] https://192.168.39.184:8443/healthz returned 200:
	ok
	I0130 21:00:04.257849   51620 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 21:00:04.257871   51620 api_server.go:131] duration metric: took 41.21988ms to wait for apiserver health ...
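	The api_server.go lines above show the readiness probe minikube logs here: wait for a kube-apiserver process, then poll https://192.168.39.184:8443/healthz until it answers 200 "ok". A minimal stand-alone sketch of that kind of probe, assuming the endpoint URL from the log and an arbitrary overall timeout (the apiserver serves a self-signed certificate in this setup, so the probe skips TLS verification); this is not minikube's implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the given /healthz URL until it returns HTTP 200 or
	// the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.184:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}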
	I0130 21:00:04.257882   51620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 21:00:04.258107   51620 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 21:00:04.258119   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 21:00:04.259759   51620 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0130 21:00:04.259775   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0130 21:00:04.263091   51620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 21:00:04.283608   51620 system_pods.go:59] 8 kube-system pods found
	I0130 21:00:04.284012   51620 system_pods.go:61] "coredns-76f75df574-jpvcn" [eda4c8fc-de07-44cf-bc77-435c3e0a2acb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 21:00:04.284054   51620 system_pods.go:61] "etcd-newest-cni-564644" [02e2369a-6144-47a2-9ceb-b1a026b3eb51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 21:00:04.284092   51620 system_pods.go:61] "kube-apiserver-newest-cni-564644" [475d6a31-edf7-4aab-ac5a-eec35b1a9495] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 21:00:04.284131   51620 system_pods.go:61] "kube-controller-manager-newest-cni-564644" [33ee3498-572f-4020-8941-2856b6106c2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 21:00:04.284154   51620 system_pods.go:61] "kube-proxy-qm4lr" [e1b54f33-0c5f-4d63-83f2-a09b2a015fa2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 21:00:04.284172   51620 system_pods.go:61] "kube-scheduler-newest-cni-564644" [bac0d4db-ca11-43e2-89cb-f91f18dc36cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 21:00:04.284208   51620 system_pods.go:61] "metrics-server-57f55c9bc5-gsg5c" [7653027d-478e-4ade-972e-d20effe7fe08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 21:00:04.284239   51620 system_pods.go:61] "storage-provisioner" [6db9cb6b-5319-4a7c-a995-cc7fd865ce8f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 21:00:04.284256   51620 system_pods.go:74] duration metric: took 26.366526ms to wait for pod list to return data ...
	I0130 21:00:04.284290   51620 default_sa.go:34] waiting for default service account to be created ...
	I0130 21:00:04.295341   51620 default_sa.go:45] found service account: "default"
	I0130 21:00:04.295364   51620 default_sa.go:55] duration metric: took 11.057446ms for default service account to be created ...
	I0130 21:00:04.295375   51620 kubeadm.go:581] duration metric: took 347.660404ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0130 21:00:04.295393   51620 node_conditions.go:102] verifying NodePressure condition ...
	I0130 21:00:04.313175   51620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:00:04.313212   51620 node_conditions.go:123] node cpu capacity is 2
	I0130 21:00:04.313224   51620 node_conditions.go:105] duration metric: took 17.810158ms to run NodePressure ...
	I0130 21:00:04.313236   51620 start.go:228] waiting for startup goroutines ...
	I0130 21:00:04.317827   51620 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 21:00:04.317845   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 21:00:04.318962   51620 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0130 21:00:04.318980   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0130 21:00:04.364292   51620 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0130 21:00:04.364321   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0130 21:00:04.381587   51620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 21:00:04.434181   51620 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0130 21:00:04.434262   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0130 21:00:04.545121   51620 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0130 21:00:04.545155   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0130 21:00:04.619798   51620 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0130 21:00:04.619831   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0130 21:00:04.659614   51620 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0130 21:00:04.659689   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0130 21:00:04.729694   51620 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0130 21:00:04.729783   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0130 21:00:04.795757   51620 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0130 21:00:04.795796   51620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0130 21:00:04.818796   51620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0130 21:00:06.297873   51620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.034722203s)
	I0130 21:00:06.298002   51620 main.go:141] libmachine: Making call to close driver server
	I0130 21:00:06.298048   51620 main.go:141] libmachine: (newest-cni-564644) Calling .Close
	I0130 21:00:06.298250   51620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.916622385s)
	I0130 21:00:06.298302   51620 main.go:141] libmachine: Making call to close driver server
	I0130 21:00:06.298325   51620 main.go:141] libmachine: (newest-cni-564644) Calling .Close
	I0130 21:00:06.300514   51620 main.go:141] libmachine: (newest-cni-564644) DBG | Closing plugin on server side
	I0130 21:00:06.300538   51620 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:00:06.300590   51620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:00:06.300611   51620 main.go:141] libmachine: Making call to close driver server
	I0130 21:00:06.300647   51620 main.go:141] libmachine: (newest-cni-564644) Calling .Close
	I0130 21:00:06.300591   51620 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:00:06.300565   51620 main.go:141] libmachine: (newest-cni-564644) DBG | Closing plugin on server side
	I0130 21:00:06.300808   51620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:00:06.300844   51620 main.go:141] libmachine: Making call to close driver server
	I0130 21:00:06.300860   51620 main.go:141] libmachine: (newest-cni-564644) Calling .Close
	I0130 21:00:06.300907   51620 main.go:141] libmachine: (newest-cni-564644) DBG | Closing plugin on server side
	I0130 21:00:06.300926   51620 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:00:06.300940   51620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:00:06.300947   51620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.09074183s)
	I0130 21:00:06.300955   51620 addons.go:470] Verifying addon metrics-server=true in "newest-cni-564644"
	I0130 21:00:06.300966   51620 main.go:141] libmachine: Making call to close driver server
	I0130 21:00:06.300993   51620 main.go:141] libmachine: (newest-cni-564644) Calling .Close
	I0130 21:00:06.301155   51620 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:00:06.301172   51620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:00:06.301200   51620 main.go:141] libmachine: (newest-cni-564644) DBG | Closing plugin on server side
	I0130 21:00:06.302742   51620 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:00:06.302767   51620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:00:06.302798   51620 main.go:141] libmachine: Making call to close driver server
	I0130 21:00:06.303072   51620 main.go:141] libmachine: (newest-cni-564644) Calling .Close
	I0130 21:00:06.304620   51620 main.go:141] libmachine: (newest-cni-564644) DBG | Closing plugin on server side
	I0130 21:00:06.304671   51620 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:00:06.304696   51620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:00:06.311589   51620 main.go:141] libmachine: Making call to close driver server
	I0130 21:00:06.311609   51620 main.go:141] libmachine: (newest-cni-564644) Calling .Close
	I0130 21:00:06.313378   51620 main.go:141] libmachine: (newest-cni-564644) DBG | Closing plugin on server side
	I0130 21:00:06.313469   51620 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:00:06.313502   51620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:00:06.723687   51620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.904796773s)
	I0130 21:00:06.723816   51620 main.go:141] libmachine: Making call to close driver server
	I0130 21:00:06.723919   51620 main.go:141] libmachine: (newest-cni-564644) Calling .Close
	I0130 21:00:06.726015   51620 main.go:141] libmachine: (newest-cni-564644) DBG | Closing plugin on server side
	I0130 21:00:06.726304   51620 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:00:06.726353   51620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:00:06.726378   51620 main.go:141] libmachine: Making call to close driver server
	I0130 21:00:06.726497   51620 main.go:141] libmachine: (newest-cni-564644) Calling .Close
	I0130 21:00:06.726798   51620 main.go:141] libmachine: (newest-cni-564644) DBG | Closing plugin on server side
	I0130 21:00:06.728336   51620 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:00:06.728398   51620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:00:06.730618   51620 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-564644 addons enable metrics-server
	
	I0130 21:00:06.732530   51620 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0130 21:00:06.733976   51620 addons.go:505] enable addons completed in 2.799504744s: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0130 21:00:06.734040   51620 start.go:233] waiting for cluster config update ...
	I0130 21:00:06.734070   51620 start.go:242] writing updated cluster config ...
	I0130 21:00:06.734446   51620 ssh_runner.go:195] Run: rm -f paused
	I0130 21:00:06.808690   51620 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0130 21:00:06.810710   51620 out.go:177] * Done! kubectl is now configured to use "newest-cni-564644" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 20:38:35 UTC, ends at Tue 2024-01-30 21:00:12 UTC. --
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.403256986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648412403243816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ad9e615e-dbc6-4d3c-9610-2ab7e4e8439c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.403846369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=06e55164-4adf-495a-accb-6137baa673f1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.403911368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=06e55164-4adf-495a-accb-6137baa673f1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.404119640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06,PodSandboxId:1fc3944662b8d0b5fb57c838a2af035185febd102c2896bc7ff1caceb828d5cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647449161787190,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db1a28e4-0c45-496e-a566-32a402b0841d,},Annotations:map[string]string{io.kubernetes.container.hash: fa069038,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe,PodSandboxId:6235f75afb8495e85b6e93de545aa4475234eb83a70af77b92651226eb347b33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647448348849188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-59zvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6ef754-0898-4e1d-9ff2-9f42f456db6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc0ce254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb,PodSandboxId:dd24181a872bf8b7293c77bb33bb2df2421b8c86da93296fb364d481237e104f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647447880044260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tlb8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547c1fe4-3ef7-421a-b460-660a05caa2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 9eba2324,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15,PodSandboxId:860cedfaac3b1a7a22c5dc5445248817e838010afbb5cb6d34ea13a10a944831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647424639917248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea2ea4b2c15f963
45c2278a0529553,},Annotations:map[string]string{io.kubernetes.container.hash: 567c6d13,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7,PodSandboxId:1415e35a0f476876c8b6cd2446b5b3163487b8d45e7328127bfdc64e5a3f2cf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647424520860936,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b9261c0610f04c
da0f868a5f8092d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed,PodSandboxId:c2eead1ebd494f3b848e4ef6632be9bd0f0f3a9be20fcfe4e306723f974fb1e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647424083566300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1ec6e77489a4ee974a22d52af3263b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481,PodSandboxId:5a6ccefe9a301e15a8fba5ade40baa6df4de253a70755e287d136b7dd2197abb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647423966462673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9429139e233673bb34bc19e0a38b20e3,},Annotations:map[string]string{io.kubernetes.container.hash: 30868f22,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=06e55164-4adf-495a-accb-6137baa673f1 name=/runtime.v1.RuntimeService/ListContainers
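	The journal entries above are CRI-O's debug traces of the kubelet polling the CRI runtime service over gRPC (Version, ImageFsInfo, ListContainers). A hypothetical stand-alone client issuing the same two RuntimeService calls is sketched below, assuming CRI-O's default socket path /var/run/crio/crio.sock; it uses the upstream cri-api and grpc-go packages and is not part of the test itself:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Socket path is the usual CRI-O default; adjust if the runtime is configured differently.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// Same call as the "/runtime.v1.RuntimeService/Version" entries in the journal.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

		// Same call as the "/runtime.v1.RuntimeService/ListContainers" entries;
		// an empty filter returns the full container list.
		cts, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range cts.Containers {
			fmt.Printf("%s %s %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}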
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.452586537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ea8883f4-b1d6-4233-a61f-8b8d8e083dff name=/runtime.v1.RuntimeService/Version
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.452669516Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ea8883f4-b1d6-4233-a61f-8b8d8e083dff name=/runtime.v1.RuntimeService/Version
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.454304782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=afc3b10a-97fa-4d7d-a660-95f7c8d03dd4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.454972519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648412454949189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=afc3b10a-97fa-4d7d-a660-95f7c8d03dd4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.456013441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c6ffae6c-a0e3-4ad4-9948-c23afe82aa78 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.456077257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c6ffae6c-a0e3-4ad4-9948-c23afe82aa78 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.456302338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06,PodSandboxId:1fc3944662b8d0b5fb57c838a2af035185febd102c2896bc7ff1caceb828d5cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647449161787190,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db1a28e4-0c45-496e-a566-32a402b0841d,},Annotations:map[string]string{io.kubernetes.container.hash: fa069038,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe,PodSandboxId:6235f75afb8495e85b6e93de545aa4475234eb83a70af77b92651226eb347b33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647448348849188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-59zvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6ef754-0898-4e1d-9ff2-9f42f456db6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc0ce254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb,PodSandboxId:dd24181a872bf8b7293c77bb33bb2df2421b8c86da93296fb364d481237e104f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647447880044260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tlb8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547c1fe4-3ef7-421a-b460-660a05caa2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 9eba2324,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15,PodSandboxId:860cedfaac3b1a7a22c5dc5445248817e838010afbb5cb6d34ea13a10a944831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647424639917248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea2ea4b2c15f963
45c2278a0529553,},Annotations:map[string]string{io.kubernetes.container.hash: 567c6d13,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7,PodSandboxId:1415e35a0f476876c8b6cd2446b5b3163487b8d45e7328127bfdc64e5a3f2cf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647424520860936,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b9261c0610f04c
da0f868a5f8092d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed,PodSandboxId:c2eead1ebd494f3b848e4ef6632be9bd0f0f3a9be20fcfe4e306723f974fb1e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647424083566300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1ec6e77489a4ee974a22d52af3263b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481,PodSandboxId:5a6ccefe9a301e15a8fba5ade40baa6df4de253a70755e287d136b7dd2197abb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647423966462673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9429139e233673bb34bc19e0a38b20e3,},Annotations:map[string]string{io.kubernetes.container.hash: 30868f22,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c6ffae6c-a0e3-4ad4-9948-c23afe82aa78 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.500264383Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b3c03e01-0d13-4699-9e48-5f1b48d07543 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.500323062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b3c03e01-0d13-4699-9e48-5f1b48d07543 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.502180946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=86154f08-548e-429d-bcd0-dd6a8425eeca name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.502637905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648412502621535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=86154f08-548e-429d-bcd0-dd6a8425eeca name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.503608527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ea42f3ee-baa8-4fb8-b143-ae73e38095c4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.503655961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ea42f3ee-baa8-4fb8-b143-ae73e38095c4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.503829171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06,PodSandboxId:1fc3944662b8d0b5fb57c838a2af035185febd102c2896bc7ff1caceb828d5cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647449161787190,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db1a28e4-0c45-496e-a566-32a402b0841d,},Annotations:map[string]string{io.kubernetes.container.hash: fa069038,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe,PodSandboxId:6235f75afb8495e85b6e93de545aa4475234eb83a70af77b92651226eb347b33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647448348849188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-59zvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6ef754-0898-4e1d-9ff2-9f42f456db6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc0ce254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb,PodSandboxId:dd24181a872bf8b7293c77bb33bb2df2421b8c86da93296fb364d481237e104f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647447880044260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tlb8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547c1fe4-3ef7-421a-b460-660a05caa2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 9eba2324,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15,PodSandboxId:860cedfaac3b1a7a22c5dc5445248817e838010afbb5cb6d34ea13a10a944831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647424639917248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea2ea4b2c15f963
45c2278a0529553,},Annotations:map[string]string{io.kubernetes.container.hash: 567c6d13,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7,PodSandboxId:1415e35a0f476876c8b6cd2446b5b3163487b8d45e7328127bfdc64e5a3f2cf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647424520860936,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b9261c0610f04c
da0f868a5f8092d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed,PodSandboxId:c2eead1ebd494f3b848e4ef6632be9bd0f0f3a9be20fcfe4e306723f974fb1e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647424083566300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1ec6e77489a4ee974a22d52af3263b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481,PodSandboxId:5a6ccefe9a301e15a8fba5ade40baa6df4de253a70755e287d136b7dd2197abb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647423966462673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9429139e233673bb34bc19e0a38b20e3,},Annotations:map[string]string{io.kubernetes.container.hash: 30868f22,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ea42f3ee-baa8-4fb8-b143-ae73e38095c4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.544922693Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9dad3fb1-edd8-4829-abfa-e0145822480f name=/runtime.v1.RuntimeService/Version
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.544979647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9dad3fb1-edd8-4829-abfa-e0145822480f name=/runtime.v1.RuntimeService/Version
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.546636365Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=05a7ea4c-fa1f-493c-a26c-4d84ddf9c09a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.547071999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648412547046524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=05a7ea4c-fa1f-493c-a26c-4d84ddf9c09a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.547897143Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8d991b14-c632-4d1d-ae8b-172af7e83c20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.547941589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8d991b14-c632-4d1d-ae8b-172af7e83c20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:00:12 default-k8s-diff-port-877742 crio[712]: time="2024-01-30 21:00:12.548087953Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06,PodSandboxId:1fc3944662b8d0b5fb57c838a2af035185febd102c2896bc7ff1caceb828d5cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706647449161787190,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db1a28e4-0c45-496e-a566-32a402b0841d,},Annotations:map[string]string{io.kubernetes.container.hash: fa069038,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe,PodSandboxId:6235f75afb8495e85b6e93de545aa4475234eb83a70af77b92651226eb347b33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706647448348849188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-59zvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6ef754-0898-4e1d-9ff2-9f42f456db6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc0ce254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb,PodSandboxId:dd24181a872bf8b7293c77bb33bb2df2421b8c86da93296fb364d481237e104f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706647447880044260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tlb8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547c1fe4-3ef7-421a-b460-660a05caa2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 9eba2324,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15,PodSandboxId:860cedfaac3b1a7a22c5dc5445248817e838010afbb5cb6d34ea13a10a944831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706647424639917248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea2ea4b2c15f963
45c2278a0529553,},Annotations:map[string]string{io.kubernetes.container.hash: 567c6d13,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7,PodSandboxId:1415e35a0f476876c8b6cd2446b5b3163487b8d45e7328127bfdc64e5a3f2cf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706647424520860936,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b9261c0610f04c
da0f868a5f8092d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed,PodSandboxId:c2eead1ebd494f3b848e4ef6632be9bd0f0f3a9be20fcfe4e306723f974fb1e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706647424083566300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1ec6e77489a4ee974a22d52af3263b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481,PodSandboxId:5a6ccefe9a301e15a8fba5ade40baa6df4de253a70755e287d136b7dd2197abb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706647423966462673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-877742,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9429139e233673bb34bc19e0a38b20e3,},Annotations:map[string]string{io.kubernetes.container.hash: 30868f22,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8d991b14-c632-4d1d-ae8b-172af7e83c20 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f3c5ab26cee1e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   1fc3944662b8d       storage-provisioner
	c9cf766ec1300       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   16 minutes ago      Running             kube-proxy                0                   6235f75afb849       kube-proxy-59zvd
	215f206f1db56       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   dd24181a872bf       coredns-5dd5756b68-tlb8h
	1333c1b625367       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   860cedfaac3b1       etcd-default-k8s-diff-port-877742
	8d7e4979680f6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   1415e35a0f476       kube-scheduler-default-k8s-diff-port-877742
	1e755138850bd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   c2eead1ebd494       kube-controller-manager-default-k8s-diff-port-877742
	39f0a670e5557       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   5a6ccefe9a301       kube-apiserver-default-k8s-diff-port-877742
	
	
	==> coredns [215f206f1db563b5f19b087eb38f25c5e43538f9be9a16f55aa0391fb14cd1cb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-877742
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-877742
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=default-k8s-diff-port-877742
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T20_43_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 20:43:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-877742
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 21:00:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 20:59:30 +0000   Tue, 30 Jan 2024 20:43:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 20:59:30 +0000   Tue, 30 Jan 2024 20:43:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 20:59:30 +0000   Tue, 30 Jan 2024 20:43:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 20:59:30 +0000   Tue, 30 Jan 2024 20:44:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.52
	  Hostname:    default-k8s-diff-port-877742
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a199b9d1d72948d8b4e58b7190dc3388
	  System UUID:                a199b9d1-d729-48d8-b4e5-8b7190dc3388
	  Boot ID:                    c404b1f1-c695-4f25-ba15-6261ad204f6c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-tlb8h                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-877742                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-877742             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-877742    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-59zvd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-877742             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-xjc2m                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-877742 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m                kubelet          Node default-k8s-diff-port-877742 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m                kubelet          Node default-k8s-diff-port-877742 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-877742 event: Registered Node default-k8s-diff-port-877742 in Controller
	
	
	==> dmesg <==
	[Jan30 20:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071777] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.505600] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.413006] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141429] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.479128] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.573992] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.096543] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.132355] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.124552] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.279004] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Jan30 20:39] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[ +22.299248] kauditd_printk_skb: 29 callbacks suppressed
	[Jan30 20:43] systemd-fstab-generator[3506]: Ignoring "noauto" for root device
	[  +9.285320] systemd-fstab-generator[3836]: Ignoring "noauto" for root device
	[Jan30 20:44] kauditd_printk_skb: 2 callbacks suppressed
	[Jan30 20:59] hrtimer: interrupt took 4780659 ns
	
	
	==> etcd [1333c1b62536717fd562e35fc59e5baa91ead7e62b446afb03325a6aa08f4c15] <==
	{"level":"info","ts":"2024-01-30T20:43:46.867799Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T20:43:46.867835Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b89c2645334f67c2","local-member-attributes":"{Name:default-k8s-diff-port-877742 ClientURLs:[https://192.168.72.52:2379]}","request-path":"/0/members/b89c2645334f67c2/attributes","cluster-id":"7062aa34dd277804","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-30T20:43:46.867865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T20:43:46.868971Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.52:2379"}
	{"level":"info","ts":"2024-01-30T20:43:46.879573Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T20:43:46.879634Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T20:43:46.879772Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T20:43:46.880802Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-30T20:53:46.912215Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":710}
	{"level":"info","ts":"2024-01-30T20:53:46.914545Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":710,"took":"1.874676ms","hash":1666808174}
	{"level":"info","ts":"2024-01-30T20:53:46.914622Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1666808174,"revision":710,"compact-revision":-1}
	{"level":"info","ts":"2024-01-30T20:58:28.405574Z","caller":"traceutil/trace.go:171","msg":"trace[222358599] transaction","detail":"{read_only:false; response_revision:1182; number_of_response:1; }","duration":"127.90524ms","start":"2024-01-30T20:58:28.277624Z","end":"2024-01-30T20:58:28.405529Z","steps":["trace[222358599] 'process raft request'  (duration: 127.463465ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T20:58:46.92324Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":954}
	{"level":"info","ts":"2024-01-30T20:58:46.925518Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":954,"took":"1.864005ms","hash":3814587298}
	{"level":"info","ts":"2024-01-30T20:58:46.925592Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3814587298,"revision":954,"compact-revision":710}
	{"level":"info","ts":"2024-01-30T20:58:52.795495Z","caller":"traceutil/trace.go:171","msg":"trace[6662423] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"240.579714ms","start":"2024-01-30T20:58:52.554899Z","end":"2024-01-30T20:58:52.795479Z","steps":["trace[6662423] 'process raft request'  (duration: 240.356641ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T20:58:57.082123Z","caller":"traceutil/trace.go:171","msg":"trace[1308662529] transaction","detail":"{read_only:false; response_revision:1206; number_of_response:1; }","duration":"264.159161ms","start":"2024-01-30T20:58:56.817936Z","end":"2024-01-30T20:58:57.082095Z","steps":["trace[1308662529] 'process raft request'  (duration: 263.780244ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T20:59:19.388568Z","caller":"traceutil/trace.go:171","msg":"trace[1412110369] transaction","detail":"{read_only:false; response_revision:1224; number_of_response:1; }","duration":"184.089812ms","start":"2024-01-30T20:59:19.204455Z","end":"2024-01-30T20:59:19.388545Z","steps":["trace[1412110369] 'process raft request'  (duration: 183.69372ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:59:20.650179Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.703224ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7476693758152913794 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.52\" mod_revision:1217 > success:<request_put:<key:\"/registry/masterleases/192.168.72.52\" value_size:66 lease:7476693758152913791 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.52\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-30T20:59:20.650314Z","caller":"traceutil/trace.go:171","msg":"trace[1717311968] transaction","detail":"{read_only:false; response_revision:1225; number_of_response:1; }","duration":"252.41543ms","start":"2024-01-30T20:59:20.397882Z","end":"2024-01-30T20:59:20.650297Z","steps":["trace[1717311968] 'process raft request'  (duration: 122.435801ms)","trace[1717311968] 'compare'  (duration: 128.555256ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-30T20:59:43.860778Z","caller":"traceutil/trace.go:171","msg":"trace[1565787855] linearizableReadLoop","detail":"{readStateIndex:1451; appliedIndex:1450; }","duration":"232.941993ms","start":"2024-01-30T20:59:43.627759Z","end":"2024-01-30T20:59:43.860701Z","steps":["trace[1565787855] 'read index received'  (duration: 232.741814ms)","trace[1565787855] 'applied index is now lower than readState.Index'  (duration: 199.482µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-30T20:59:43.861046Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.260127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-01-30T20:59:43.861129Z","caller":"traceutil/trace.go:171","msg":"trace[1453516773] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1245; }","duration":"233.373182ms","start":"2024-01-30T20:59:43.627734Z","end":"2024-01-30T20:59:43.861107Z","steps":["trace[1453516773] 'agreement among raft nodes before linearized reading'  (duration: 233.210609ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T20:59:43.861231Z","caller":"traceutil/trace.go:171","msg":"trace[622183680] transaction","detail":"{read_only:false; response_revision:1245; number_of_response:1; }","duration":"324.316165ms","start":"2024-01-30T20:59:43.536896Z","end":"2024-01-30T20:59:43.861212Z","steps":["trace[622183680] 'process raft request'  (duration: 323.658215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T20:59:43.861367Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T20:59:43.536874Z","time spent":"324.422152ms","remote":"127.0.0.1:46302","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1243 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 21:00:12 up 21 min,  0 users,  load average: 0.31, 0.23, 0.23
	Linux default-k8s-diff-port-877742 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [39f0a670e55570a9b51d7367ef59db05811eb8682979dbe095f45e8275e9d481] <==
	I0130 20:56:49.547282       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:56:49.548169       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:56:49.548223       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:56:49.549466       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:57:48.381784       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 20:58:48.380306       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 20:58:48.553568       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:58:48.553857       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:58:48.554984       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 20:58:49.554344       1 handler_proxy.go:93] no RequestInfo found in the context
	W0130 20:58:49.554513       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:58:49.554577       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:58:49.554583       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0130 20:58:49.554608       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:58:49.555764       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 20:59:48.380627       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 20:59:49.555529       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:59:49.555779       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 20:59:49.555829       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 20:59:49.556063       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 20:59:49.556172       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 20:59:49.557361       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1e755138850bdbdbdaabd8288c18076b944e8aca49ac88a903c77e5517b6c8ed] <==
	I0130 20:54:35.183121       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:55:04.680280       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:55:05.193530       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 20:55:13.238512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="976.483µs"
	I0130 20:55:24.237554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="457.999µs"
	E0130 20:55:34.685702       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:55:35.202334       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:56:04.694104       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:56:05.211713       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:56:34.699710       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:56:35.222742       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:57:04.706288       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:57:05.234278       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:57:34.711347       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:57:35.247824       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:58:04.720039       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:58:05.257955       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:58:34.728261       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:58:35.271874       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:59:04.735533       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:59:05.280943       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 20:59:34.743626       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 20:59:35.291523       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 21:00:04.754527       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 21:00:05.304318       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c9cf766ec1300b7389691574ae4425f65c12e7a4420474a87bc423ed03778cfe] <==
	I0130 20:44:08.907300       1 server_others.go:69] "Using iptables proxy"
	I0130 20:44:08.955957       1 node.go:141] Successfully retrieved node IP: 192.168.72.52
	I0130 20:44:09.031474       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 20:44:09.031521       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 20:44:09.041602       1 server_others.go:152] "Using iptables Proxier"
	I0130 20:44:09.041979       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 20:44:09.042677       1 server.go:846] "Version info" version="v1.28.4"
	I0130 20:44:09.042740       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 20:44:09.044891       1 config.go:188] "Starting service config controller"
	I0130 20:44:09.045813       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 20:44:09.045997       1 config.go:315] "Starting node config controller"
	I0130 20:44:09.046037       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 20:44:09.048191       1 config.go:97] "Starting endpoint slice config controller"
	I0130 20:44:09.048235       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 20:44:09.146597       1 shared_informer.go:318] Caches are synced for node config
	I0130 20:44:09.146736       1 shared_informer.go:318] Caches are synced for service config
	I0130 20:44:09.149059       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8d7e4979680f685a9237fa9b0b98fca2b03c3b22c86f774beb11a61c94a5cde7] <==
	W0130 20:43:48.556686       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 20:43:48.556838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0130 20:43:48.557152       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 20:43:48.557196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 20:43:48.557338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 20:43:48.557350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0130 20:43:48.557458       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 20:43:48.557470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0130 20:43:49.371093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 20:43:49.371691       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0130 20:43:49.373331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 20:43:49.373495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0130 20:43:49.401521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 20:43:49.401648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0130 20:43:49.427513       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 20:43:49.427606       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 20:43:49.474725       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 20:43:49.474748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0130 20:43:49.491589       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0130 20:43:49.491652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0130 20:43:49.579646       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 20:43:49.579831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 20:43:49.793672       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 20:43:49.793764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0130 20:43:52.645477       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 20:38:35 UTC, ends at Tue 2024-01-30 21:00:13 UTC. --
	Jan 30 20:57:52 default-k8s-diff-port-877742 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:57:52 default-k8s-diff-port-877742 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:57:52 default-k8s-diff-port-877742 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:58:04 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:58:04.221257    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:58:19 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:58:19.216804    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:58:30 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:58:30.215920    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:58:41 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:58:41.216951    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:58:52 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:58:52.270639    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:58:52 default-k8s-diff-port-877742 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:58:52 default-k8s-diff-port-877742 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:58:52 default-k8s-diff-port-877742 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 20:58:52 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:58:52.499132    3843 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 30 20:58:56 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:58:56.217702    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:59:09 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:59:09.216577    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:59:24 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:59:24.217361    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:59:36 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:59:36.216508    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:59:48 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:59:48.218537    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	Jan 30 20:59:52 default-k8s-diff-port-877742 kubelet[3843]: E0130 20:59:52.272983    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 20:59:52 default-k8s-diff-port-877742 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 20:59:52 default-k8s-diff-port-877742 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 20:59:52 default-k8s-diff-port-877742 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 21:00:03 default-k8s-diff-port-877742 kubelet[3843]: E0130 21:00:03.233995    3843 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 21:00:03 default-k8s-diff-port-877742 kubelet[3843]: E0130 21:00:03.234053    3843 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 21:00:03 default-k8s-diff-port-877742 kubelet[3843]: E0130 21:00:03.234297    3843 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hkp9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-xjc2m_kube-system(7b9a273b-d328-4ae8-925e-5bb305cfe574): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 21:00:03 default-k8s-diff-port-877742 kubelet[3843]: E0130 21:00:03.234340    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-xjc2m" podUID="7b9a273b-d328-4ae8-925e-5bb305cfe574"
	
	
	==> storage-provisioner [f3c5ab26cee1eb40d1ac8e976403a8fb0854477ee6a3697ac9696dcd46dcfd06] <==
	I0130 20:44:09.272180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 20:44:09.289039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 20:44:09.289242       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 20:44:09.299966       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 20:44:09.300219       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-877742_a7f17109-70f2-469e-89f8-8a72dd6e5923!
	I0130 20:44:09.300929       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6058efe4-4925-4878-86f5-a6ec8615d032", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-877742_a7f17109-70f2-469e-89f8-8a72dd6e5923 became leader
	I0130 20:44:09.401549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-877742_a7f17109-70f2-469e-89f8-8a72dd6e5923!
	

                                                
                                                
-- /stdout --
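The repeated ErrImagePull/ImagePullBackOff entries in the kubelet log above are expected for this suite: the metrics-server pod appears to be pointed at the unreachable registry fake.domain by the test setup, so the DNS lookup fails and the pod can never become Ready; the ip6tables "canary" warnings are separate, benign noise from the guest kernel lacking the ip6table_nat module. A minimal way to confirm the image-pull diagnosis against the same cluster (reusing the profile and pod name already shown in the log) would have been:

	kubectl --context default-k8s-diff-port-877742 -n kube-system describe pod metrics-server-57f55c9bc5-xjc2m
	# Events should repeat "pinging container registry fake.domain ... no such host" and "Back-off pulling image"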
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-877742 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xjc2m
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-877742 describe pod metrics-server-57f55c9bc5-xjc2m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-877742 describe pod metrics-server-57f55c9bc5-xjc2m: exit status 1 (69.315508ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xjc2m" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-877742 describe pod metrics-server-57f55c9bc5-xjc2m: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (168.46s)
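The NotFound from the post-mortem describe is a timing artifact rather than an additional failure: between the field-selector listing (which still reported metrics-server-57f55c9bc5-xjc2m as non-running) and the describe call, the pod was evidently removed, presumably by the addon being torn down or the ReplicaSet replacing it. When reproducing locally, keeping a watch open on the same selector the helper uses avoids losing short-lived pods between separate commands (a sketch, assuming the same context name):

	kubectl --context default-k8s-diff-port-877742 get po -A --field-selector=status.phase!=Running -w
	# -w keeps the listing open, so pods that disappear between a "get" and a "describe" are still observed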

                                                
                                    

Test pass (242/310)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 51.04
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.14
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 42.86
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 44.33
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.58
31 TestOffline 137.18
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 216.44
38 TestAddons/parallel/Registry 20.36
40 TestAddons/parallel/InspektorGadget 11.06
41 TestAddons/parallel/MetricsServer 7.14
42 TestAddons/parallel/HelmTiller 27.26
44 TestAddons/parallel/CSI 42.63
45 TestAddons/parallel/Headlamp 16.44
46 TestAddons/parallel/CloudSpanner 7.43
47 TestAddons/parallel/LocalPath 20.27
48 TestAddons/parallel/NvidiaDevicePlugin 6.64
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
54 TestCertOptions 72.55
55 TestCertExpiration 273.26
57 TestForceSystemdFlag 122.5
58 TestForceSystemdEnv 103.8
60 TestKVMDriverInstallOrUpdate 5.11
64 TestErrorSpam/setup 50.36
65 TestErrorSpam/start 0.37
66 TestErrorSpam/status 0.75
67 TestErrorSpam/pause 1.55
68 TestErrorSpam/unpause 1.73
69 TestErrorSpam/stop 2.25
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 64.52
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 37.69
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 41.06
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.51
92 TestFunctional/serial/LogsFileCmd 1.51
93 TestFunctional/serial/InvalidService 4.38
95 TestFunctional/parallel/ConfigCmd 0.4
96 TestFunctional/parallel/DashboardCmd 32.8
97 TestFunctional/parallel/DryRun 0.3
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 1.02
103 TestFunctional/parallel/ServiceCmdConnect 12.62
104 TestFunctional/parallel/AddonsCmd 0.15
105 TestFunctional/parallel/PersistentVolumeClaim 55.9
107 TestFunctional/parallel/SSHCmd 0.45
108 TestFunctional/parallel/CpCmd 1.41
109 TestFunctional/parallel/MySQL 29.06
110 TestFunctional/parallel/FileSync 0.21
111 TestFunctional/parallel/CertSync 1.38
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
119 TestFunctional/parallel/License 0.62
120 TestFunctional/parallel/Version/short 0.06
121 TestFunctional/parallel/Version/components 0.48
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
125 TestFunctional/parallel/ServiceCmd/DeployApp 13.21
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
136 TestFunctional/parallel/ProfileCmd/profile_list 0.33
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
138 TestFunctional/parallel/MountCmd/any-port 9.48
139 TestFunctional/parallel/MountCmd/specific-port 2.15
140 TestFunctional/parallel/ServiceCmd/List 0.37
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.39
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
143 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
144 TestFunctional/parallel/ImageCommands/ImageListTable 0.36
145 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
146 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
147 TestFunctional/parallel/ImageCommands/ImageBuild 13.15
148 TestFunctional/parallel/ImageCommands/Setup 2.04
149 TestFunctional/parallel/ServiceCmd/Format 0.4
150 TestFunctional/parallel/ServiceCmd/URL 0.38
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestIngressAddonLegacy/StartLegacyK8sCluster 128.32
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.94
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.56
172 TestJSONOutput/start/Command 98.02
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.67
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.65
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.1
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.21
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 98.48
204 TestMountStart/serial/StartWithMountFirst 28.37
205 TestMountStart/serial/VerifyMountFirst 0.4
206 TestMountStart/serial/StartWithMountSecond 26.71
207 TestMountStart/serial/VerifyMountSecond 0.4
208 TestMountStart/serial/DeleteFirst 0.65
209 TestMountStart/serial/VerifyMountPostDelete 0.39
210 TestMountStart/serial/Stop 1.21
211 TestMountStart/serial/RestartStopped 22.14
212 TestMountStart/serial/VerifyMountPostStop 0.4
215 TestMultiNode/serial/FreshStart2Nodes 111.55
216 TestMultiNode/serial/DeployApp2Nodes 6.45
217 TestMultiNode/serial/PingHostFrom2Pods 0.95
218 TestMultiNode/serial/AddNode 46.04
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.21
221 TestMultiNode/serial/CopyFile 7.53
222 TestMultiNode/serial/StopNode 2.3
223 TestMultiNode/serial/StartAfterStop 31.62
225 TestMultiNode/serial/DeleteNode 1.78
227 TestMultiNode/serial/RestartMultiNode 446.76
228 TestMultiNode/serial/ValidateNameConflict 48.51
235 TestScheduledStopUnix 118.72
239 TestRunningBinaryUpgrade 158.74
241 TestKubernetesUpgrade 215.06
257 TestNetworkPlugins/group/false 4.25
261 TestStoppedBinaryUpgrade/Setup 2.48
262 TestStoppedBinaryUpgrade/Upgrade 185.01
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
265 TestNoKubernetes/serial/StartWithK8s 78.31
266 TestNoKubernetes/serial/StartWithStopK8s 10.96
267 TestNoKubernetes/serial/Start 29.11
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
270 TestPause/serial/Start 108.53
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
272 TestNoKubernetes/serial/ProfileList 29.15
273 TestNoKubernetes/serial/Stop 2.45
274 TestNoKubernetes/serial/StartNoArgs 29.21
276 TestStartStop/group/old-k8s-version/serial/FirstStart 344.8
278 TestStartStop/group/no-preload/serial/FirstStart 163.07
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
281 TestStartStop/group/embed-certs/serial/FirstStart 161.59
282 TestPause/serial/SecondStartNoReconfiguration 80.43
283 TestPause/serial/Pause 0.74
284 TestPause/serial/VerifyStatus 0.28
285 TestPause/serial/Unpause 0.69
286 TestPause/serial/PauseAgain 0.94
287 TestPause/serial/DeletePaused 1.05
288 TestPause/serial/VerifyDeletedResources 0.75
290 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 101.03
291 TestStartStop/group/no-preload/serial/DeployApp 14.37
292 TestStartStop/group/embed-certs/serial/DeployApp 10.32
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
297 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.31
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
302 TestStartStop/group/no-preload/serial/SecondStart 670.11
303 TestStartStop/group/old-k8s-version/serial/DeployApp 10.4
304 TestStartStop/group/embed-certs/serial/SecondStart 582.72
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
308 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 826.72
310 TestStartStop/group/old-k8s-version/serial/SecondStart 614.37
320 TestStartStop/group/newest-cni/serial/FirstStart 59.96
321 TestNetworkPlugins/group/auto/Start 124.07
322 TestNetworkPlugins/group/kindnet/Start 103.85
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.41
325 TestStartStop/group/newest-cni/serial/Stop 11.13
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
327 TestStartStop/group/newest-cni/serial/SecondStart 57.17
328 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
329 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
330 TestNetworkPlugins/group/kindnet/NetCatPod 15.3
331 TestNetworkPlugins/group/auto/KubeletFlags 0.23
332 TestNetworkPlugins/group/auto/NetCatPod 13.32
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
336 TestStartStop/group/newest-cni/serial/Pause 3.25
337 TestNetworkPlugins/group/calico/Start 98.58
338 TestNetworkPlugins/group/custom-flannel/Start 114.54
339 TestNetworkPlugins/group/auto/DNS 0.17
340 TestNetworkPlugins/group/auto/Localhost 0.16
341 TestNetworkPlugins/group/auto/HairPin 0.16
342 TestNetworkPlugins/group/kindnet/DNS 0.2
343 TestNetworkPlugins/group/kindnet/Localhost 0.14
344 TestNetworkPlugins/group/kindnet/HairPin 0.15
345 TestNetworkPlugins/group/enable-default-cni/Start 137.94
346 TestNetworkPlugins/group/flannel/Start 152.78
347 TestNetworkPlugins/group/calico/ControllerPod 6.01
348 TestNetworkPlugins/group/calico/KubeletFlags 0.26
349 TestNetworkPlugins/group/calico/NetCatPod 11.23
350 TestNetworkPlugins/group/calico/DNS 0.4
351 TestNetworkPlugins/group/calico/Localhost 0.37
352 TestNetworkPlugins/group/calico/HairPin 0.23
353 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
354 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.34
355 TestNetworkPlugins/group/custom-flannel/DNS 0.21
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
358 TestNetworkPlugins/group/bridge/Start 104.57
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.46
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
366 TestNetworkPlugins/group/flannel/NetCatPod 12.3
367 TestNetworkPlugins/group/flannel/DNS 0.2
368 TestNetworkPlugins/group/flannel/Localhost 0.13
369 TestNetworkPlugins/group/flannel/HairPin 0.14
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
371 TestNetworkPlugins/group/bridge/NetCatPod 10.21
372 TestNetworkPlugins/group/bridge/DNS 0.16
373 TestNetworkPlugins/group/bridge/Localhost 0.15
374 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.16.0/json-events (51.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-311980 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-311980 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (51.03886654s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (51.04s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
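preload-exists essentially asserts that the tarball fetched during the json-events step is present in the local cache; a quick manual equivalent, using the cache path from the download log below, is:

	ls -lh /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	# a missing file here is what would turn this subtest into a failure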

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-311980
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-311980: exit status 85 (69.425021ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-311980 | jenkins | v1.32.0 | 30 Jan 24 19:22 UTC |          |
	|         | -p download-only-311980        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 19:22:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 19:22:43.020274   11679 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:22:43.020397   11679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:22:43.020407   11679 out.go:309] Setting ErrFile to fd 2...
	I0130 19:22:43.020421   11679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:22:43.020622   11679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	W0130 19:22:43.020744   11679 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18007-4458/.minikube/config/config.json: open /home/jenkins/minikube-integration/18007-4458/.minikube/config/config.json: no such file or directory
	I0130 19:22:43.021309   11679 out.go:303] Setting JSON to true
	I0130 19:22:43.022135   11679 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":308,"bootTime":1706642255,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:22:43.022185   11679 start.go:138] virtualization: kvm guest
	I0130 19:22:43.024574   11679 out.go:97] [download-only-311980] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:22:43.025961   11679 out.go:169] MINIKUBE_LOCATION=18007
	W0130 19:22:43.024671   11679 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball: no such file or directory
	I0130 19:22:43.024710   11679 notify.go:220] Checking for updates...
	I0130 19:22:43.028216   11679 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:22:43.029494   11679 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 19:22:43.030632   11679 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:22:43.031827   11679 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0130 19:22:43.033864   11679 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0130 19:22:43.034050   11679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:22:43.499981   11679 out.go:97] Using the kvm2 driver based on user configuration
	I0130 19:22:43.500004   11679 start.go:298] selected driver: kvm2
	I0130 19:22:43.500009   11679 start.go:902] validating driver "kvm2" against <nil>
	I0130 19:22:43.500351   11679 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:22:43.500477   11679 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 19:22:43.514076   11679 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 19:22:43.514161   11679 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 19:22:43.514838   11679 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0130 19:22:43.515021   11679 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0130 19:22:43.515077   11679 cni.go:84] Creating CNI manager for ""
	I0130 19:22:43.515094   11679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 19:22:43.515109   11679 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 19:22:43.515121   11679 start_flags.go:321] config:
	{Name:download-only-311980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-311980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:22:43.515380   11679 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:22:43.517355   11679 out.go:97] Downloading VM boot image ...
	I0130 19:22:43.517384   11679 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18007-4458/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 19:22:52.638155   11679 out.go:97] Starting control plane node download-only-311980 in cluster download-only-311980
	I0130 19:22:52.638180   11679 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 19:22:52.744954   11679 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0130 19:22:52.744983   11679 cache.go:56] Caching tarball of preloaded images
	I0130 19:22:52.745140   11679 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 19:22:52.747609   11679 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0130 19:22:52.747625   11679 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:22:52.863341   11679 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0130 19:23:10.831457   11679 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:23:10.831542   11679 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:23:11.725237   11679 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0130 19:23:11.726001   11679 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/download-only-311980/config.json ...
	I0130 19:23:11.726032   11679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/download-only-311980/config.json: {Name:mk4cba698c4971d86a6d57dad5d57c68cc6e73b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:23:11.726176   11679 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 19:23:11.726333   11679 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/18007-4458/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-311980"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
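The exit status 85 above is the expected outcome rather than a regression: with --download-only no control-plane node is ever created, so "minikube logs" finds nothing to read and prints "The control plane node \"\" does not exist." The subtest evidently tolerates and records this non-zero exit, which is why it still passes. A quick sanity check of the same state, before the later DeleteAll step removes the profile, might be:

	out/minikube-linux-amd64 status -p download-only-311980 || true
	# expected to report a nonexistent/never-started host rather than a running cluster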

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-311980
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (42.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-119193 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-119193 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (42.862593571s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (42.86s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-119193
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-119193: exit status 85 (72.058771ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-311980 | jenkins | v1.32.0 | 30 Jan 24 19:22 UTC |                     |
	|         | -p download-only-311980        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC | 30 Jan 24 19:23 UTC |
	| delete  | -p download-only-311980        | download-only-311980 | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC | 30 Jan 24 19:23 UTC |
	| start   | -o=json --download-only        | download-only-119193 | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC |                     |
	|         | -p download-only-119193        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 19:23:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 19:23:34.401455   11966 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:23:34.401718   11966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:23:34.401729   11966 out.go:309] Setting ErrFile to fd 2...
	I0130 19:23:34.401733   11966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:23:34.401913   11966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:23:34.402473   11966 out.go:303] Setting JSON to true
	I0130 19:23:34.403339   11966 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":360,"bootTime":1706642255,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:23:34.403399   11966 start.go:138] virtualization: kvm guest
	I0130 19:23:34.405792   11966 out.go:97] [download-only-119193] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:23:34.405980   11966 notify.go:220] Checking for updates...
	I0130 19:23:34.407321   11966 out.go:169] MINIKUBE_LOCATION=18007
	I0130 19:23:34.409014   11966 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:23:34.410471   11966 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 19:23:34.411906   11966 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:23:34.413187   11966 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0130 19:23:34.415482   11966 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0130 19:23:34.415679   11966 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:23:34.446902   11966 out.go:97] Using the kvm2 driver based on user configuration
	I0130 19:23:34.446921   11966 start.go:298] selected driver: kvm2
	I0130 19:23:34.446926   11966 start.go:902] validating driver "kvm2" against <nil>
	I0130 19:23:34.447200   11966 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:23:34.447290   11966 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 19:23:34.461136   11966 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 19:23:34.461194   11966 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 19:23:34.461655   11966 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0130 19:23:34.461829   11966 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0130 19:23:34.461902   11966 cni.go:84] Creating CNI manager for ""
	I0130 19:23:34.461917   11966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 19:23:34.461930   11966 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 19:23:34.461942   11966 start_flags.go:321] config:
	{Name:download-only-119193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-119193 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:23:34.462100   11966 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:23:34.463952   11966 out.go:97] Starting control plane node download-only-119193 in cluster download-only-119193
	I0130 19:23:34.463965   11966 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 19:23:34.564016   11966 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 19:23:34.564049   11966 cache.go:56] Caching tarball of preloaded images
	I0130 19:23:34.564212   11966 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 19:23:34.566298   11966 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0130 19:23:34.566315   11966 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:23:34.679376   11966 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 19:23:48.321034   11966 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:23:48.321115   11966 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:23:49.250752   11966 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 19:23:49.251106   11966 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/download-only-119193/config.json ...
	I0130 19:23:49.251144   11966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/download-only-119193/config.json: {Name:mkd7597a7be1c1c3c22dfb564b85f6248c70c34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:23:49.251345   11966 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 19:23:49.251513   11966 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18007-4458/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-119193"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-119193
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (44.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-361110 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-361110 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (44.329932242s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (44.33s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-361110
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-361110: exit status 85 (71.882605ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-311980 | jenkins | v1.32.0 | 30 Jan 24 19:22 UTC |                     |
	|         | -p download-only-311980           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC | 30 Jan 24 19:23 UTC |
	| delete  | -p download-only-311980           | download-only-311980 | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC | 30 Jan 24 19:23 UTC |
	| start   | -o=json --download-only           | download-only-119193 | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC |                     |
	|         | -p download-only-119193           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 30 Jan 24 19:24 UTC | 30 Jan 24 19:24 UTC |
	| delete  | -p download-only-119193           | download-only-119193 | jenkins | v1.32.0 | 30 Jan 24 19:24 UTC | 30 Jan 24 19:24 UTC |
	| start   | -o=json --download-only           | download-only-361110 | jenkins | v1.32.0 | 30 Jan 24 19:24 UTC |                     |
	|         | -p download-only-361110           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 19:24:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 19:24:17.598760   12211 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:24:17.598935   12211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:24:17.598945   12211 out.go:309] Setting ErrFile to fd 2...
	I0130 19:24:17.598952   12211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:24:17.599151   12211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:24:17.599728   12211 out.go:303] Setting JSON to true
	I0130 19:24:17.600530   12211 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":403,"bootTime":1706642255,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:24:17.600588   12211 start.go:138] virtualization: kvm guest
	I0130 19:24:17.602870   12211 out.go:97] [download-only-361110] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:24:17.604633   12211 out.go:169] MINIKUBE_LOCATION=18007
	I0130 19:24:17.603024   12211 notify.go:220] Checking for updates...
	I0130 19:24:17.607733   12211 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:24:17.609138   12211 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 19:24:17.610499   12211 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:24:17.611975   12211 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0130 19:24:17.614656   12211 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0130 19:24:17.614867   12211 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:24:17.645442   12211 out.go:97] Using the kvm2 driver based on user configuration
	I0130 19:24:17.645461   12211 start.go:298] selected driver: kvm2
	I0130 19:24:17.645466   12211 start.go:902] validating driver "kvm2" against <nil>
	I0130 19:24:17.645759   12211 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:24:17.645832   12211 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 19:24:17.659329   12211 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 19:24:17.659367   12211 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 19:24:17.659797   12211 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0130 19:24:17.659928   12211 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0130 19:24:17.659992   12211 cni.go:84] Creating CNI manager for ""
	I0130 19:24:17.660004   12211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 19:24:17.660014   12211 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 19:24:17.660023   12211 start_flags.go:321] config:
	{Name:download-only-361110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-361110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:24:17.660126   12211 iso.go:125] acquiring lock: {Name:mk072ab123730f3058e85a91672f85e887bd47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:24:17.661884   12211 out.go:97] Starting control plane node download-only-361110 in cluster download-only-361110
	I0130 19:24:17.661899   12211 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 19:24:17.767906   12211 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0130 19:24:17.767936   12211 cache.go:56] Caching tarball of preloaded images
	I0130 19:24:17.768114   12211 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 19:24:17.769909   12211 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0130 19:24:17.769921   12211 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:24:17.880237   12211 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0130 19:24:30.811169   12211 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:24:30.811300   12211 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18007-4458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0130 19:24:31.625627   12211 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0130 19:24:31.625942   12211 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/download-only-361110/config.json ...
	I0130 19:24:31.625969   12211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/download-only-361110/config.json: {Name:mk0c9b82ca2e7213bad81b2d09702bc301eacc46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:24:31.626113   12211 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 19:24:31.626250   12211 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18007-4458/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-361110"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-361110
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-533773 --alsologtostderr --binary-mirror http://127.0.0.1:38167 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-533773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-533773
--- PASS: TestBinaryMirror (0.58s)

TestOffline (137.18s)
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-869267 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-869267 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m16.050798372s)
helpers_test.go:175: Cleaning up "offline-crio-869267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-869267
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-869267: (1.132644034s)
--- PASS: TestOffline (137.18s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-663262
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-663262: exit status 85 (60.461988ms)

                                                
                                                
-- stdout --
	* Profile "addons-663262" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-663262"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-663262
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-663262: exit status 85 (61.826996ms)

                                                
                                                
-- stdout --
	* Profile "addons-663262" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-663262"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (216.44s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-663262 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-663262 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m36.443143079s)
--- PASS: TestAddons/Setup (216.44s)

TestAddons/parallel/Registry (20.36s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 56.903723ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-w2wdf" [cbacb56e-d023-4053-959e-f949629b5e23] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.037739269s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-n8wz9" [fff0fc97-43df-44b0-b675-f7fab6617f6b] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005980967s
addons_test.go:340: (dbg) Run:  kubectl --context addons-663262 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-663262 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-663262 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.141661378s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 ip
2024/01/30 19:28:59 [DEBUG] GET http://192.168.39.252:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.36s)
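Note: the registry check logged above can be replayed by hand against any running profile; a minimal sketch follows (hypothetical manual repro, assuming a profile named addons-663262 with the registry addon enabled as in this run):

	# enable the addon and wait for the registry pods to become Ready
	out/minikube-linux-amd64 -p addons-663262 addons enable registry
	kubectl --context addons-663262 -n kube-system wait pod -l actual-registry=true --for=condition=Ready --timeout=6m0s
	# probe the in-cluster registry service the same way the test does
	kubectl --context addons-663262 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"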

TestAddons/parallel/InspektorGadget (11.06s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bbp7z" [c8308aac-e07f-4215-a0da-bde25fac4527] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00582604s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-663262
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-663262: (6.053741856s)
--- PASS: TestAddons/parallel/InspektorGadget (11.06s)

TestAddons/parallel/MetricsServer (7.14s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 6.396575ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-nxh8w" [3c2117ed-7cab-4d9a-8960-57004b317d18] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006665393s
addons_test.go:415: (dbg) Run:  kubectl --context addons-663262 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-663262 addons disable metrics-server --alsologtostderr -v=1: (1.064903906s)
--- PASS: TestAddons/parallel/MetricsServer (7.14s)

TestAddons/parallel/HelmTiller (27.26s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 56.958413ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-dffh2" [ea9293b4-e84a-4770-8323-32899c9e383c] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.011037425s
addons_test.go:473: (dbg) Run:  kubectl --context addons-663262 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-663262 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (11.272152393s)
addons_test.go:478: kubectl --context addons-663262 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-663262 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-663262 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.757956194s)
addons_test.go:478: kubectl --context addons-663262 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-663262 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-663262 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.102546688s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (27.26s)

TestAddons/parallel/CSI (42.63s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 6.819989ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-663262 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-663262 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c5a29fdc-4bc7-450f-ab79-3253b874bb19] Pending
helpers_test.go:344: "task-pv-pod" [c5a29fdc-4bc7-450f-ab79-3253b874bb19] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c5a29fdc-4bc7-450f-ab79-3253b874bb19] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004775292s
addons_test.go:584: (dbg) Run:  kubectl --context addons-663262 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-663262 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-663262 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-663262 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-663262 delete pod task-pv-pod: (1.193166985s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-663262 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-663262 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-663262 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0948dcd8-9792-480a-8013-383a6fec4594] Pending
helpers_test.go:344: "task-pv-pod-restore" [0948dcd8-9792-480a-8013-383a6fec4594] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0948dcd8-9792-480a-8013-383a6fec4594] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003813801s
addons_test.go:626: (dbg) Run:  kubectl --context addons-663262 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-663262 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-663262 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-663262 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.770886068s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.63s)
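For reference, the snapshot/restore flow exercised above can be replayed manually with the same manifests the test references from its testdata directory; a rough sketch (hypothetical invocation, assuming the csi-hostpath-driver and volumesnapshots addons are enabled on the addons-663262 profile):

	# claim a volume and run a pod against it
	kubectl --context addons-663262 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-663262 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-663262 wait pod task-pv-pod --for=condition=Ready --timeout=6m0s
	# snapshot the claim, then restore it into a new claim and pod
	kubectl --context addons-663262 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-663262 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
	kubectl --context addons-663262 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-663262 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml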

TestAddons/parallel/Headlamp (16.44s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-663262 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-663262 --alsologtostderr -v=1: (1.433361537s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-n64s5" [2fcc34b1-9bbf-4735-9978-febdfce4af37] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-n64s5" [2fcc34b1-9bbf-4735-9978-febdfce4af37] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-n64s5" [2fcc34b1-9bbf-4735-9978-febdfce4af37] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.006624984s
--- PASS: TestAddons/parallel/Headlamp (16.44s)

TestAddons/parallel/CloudSpanner (7.43s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-wpjfh" [c415d239-132f-432b-832b-36ab8cd4cd6d] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.011321913s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-663262
addons_test.go:860: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-663262: (1.354657744s)
--- PASS: TestAddons/parallel/CloudSpanner (7.43s)

TestAddons/parallel/LocalPath (20.27s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-663262 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-663262 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663262 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1991cd3e-18ed-4c6d-bf0d-9c3b5935fe2e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1991cd3e-18ed-4c6d-bf0d-9c3b5935fe2e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1991cd3e-18ed-4c6d-bf0d-9c3b5935fe2e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 12.004657557s
addons_test.go:891: (dbg) Run:  kubectl --context addons-663262 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 ssh "cat /opt/local-path-provisioner/pvc-47ecd82a-1437-4c50-a51d-f453d83df9f5_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-663262 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-663262 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-663262 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (20.27s)

TestAddons/parallel/NvidiaDevicePlugin (6.64s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wfrjk" [fad394cb-bffb-41c2-825e-f94efd52f7c8] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008728334s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-663262
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.64s)

TestAddons/parallel/Yakd (5.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-6vskh" [f2a8b988-80f1-491d-8d0c-f9d7d229fc3c] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004290938s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-663262 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-663262 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestCertOptions (72.55s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-569480 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-569480 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m10.94905722s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-569480 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-569480 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-569480 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-569480" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-569480
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-569480: (1.024618746s)
--- PASS: TestCertOptions (72.55s)

TestCertExpiration (273.26s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-565458 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0130 20:23:39.710655   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-565458 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m11.937632902s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-565458 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-565458 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (20.290934098s)
helpers_test.go:175: Cleaning up "cert-expiration-565458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-565458
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-565458: (1.031198538s)
--- PASS: TestCertExpiration (273.26s)

TestForceSystemdFlag (122.5s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-682533 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-682533 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m0.936280181s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-682533 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-682533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-682533
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-682533: (1.341093722s)
--- PASS: TestForceSystemdFlag (122.50s)

TestForceSystemdEnv (103.8s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-353035 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-353035 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m42.77558481s)
helpers_test.go:175: Cleaning up "force-systemd-env-353035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-353035
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-353035: (1.028354099s)
--- PASS: TestForceSystemdEnv (103.80s)

TestKVMDriverInstallOrUpdate (5.11s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.11s)

TestErrorSpam/setup (50.36s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-518905 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-518905 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-518905 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-518905 --driver=kvm2  --container-runtime=crio: (50.358262509s)
--- PASS: TestErrorSpam/setup (50.36s)

TestErrorSpam/start (0.37s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.75s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.55s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 pause
--- PASS: TestErrorSpam/pause (1.55s)

TestErrorSpam/unpause (1.73s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

TestErrorSpam/stop (2.25s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 stop: (2.088141052s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-518905 --log_dir /tmp/nospam-518905 stop
--- PASS: TestErrorSpam/stop (2.25s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18007-4458/.minikube/files/etc/test/nested/copy/11667/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (64.52s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741304 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-741304 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m4.521211012s)
--- PASS: TestFunctional/serial/StartWithProxy (64.52s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.69s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741304 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-741304 --alsologtostderr -v=8: (37.687964885s)
functional_test.go:659: soft start took 37.688576342s for "functional-741304" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.69s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-741304 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-741304 cache add registry.k8s.io/pause:3.3: (1.099740397s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-741304 cache add registry.k8s.io/pause:latest: (1.013575876s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (227.016487ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 kubectl -- --context functional-741304 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-741304 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (41.06s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741304 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-741304 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.057157231s)
functional_test.go:757: restart took 41.057287031s for "functional-741304" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.06s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-741304 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.51s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-741304 logs: (1.513137576s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.51s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 logs --file /tmp/TestFunctionalserialLogsFileCmd3866191506/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-741304 logs --file /tmp/TestFunctionalserialLogsFileCmd3866191506/001/logs.txt: (1.512876199s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

TestFunctional/serial/InvalidService (4.38s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-741304 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-741304
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-741304: exit status 115 (303.47437ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.230:31848 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-741304 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)

TestFunctional/parallel/ConfigCmd (0.4s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 config get cpus: exit status 14 (60.655084ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 config get cpus: exit status 14 (58.318877ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (32.8s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-741304 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-741304 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 19899: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (32.80s)

TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741304 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-741304 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (139.190748ms)

-- stdout --
	* [functional-741304] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0130 19:38:21.317516   19292 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:38:21.317657   19292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:21.317670   19292 out.go:309] Setting ErrFile to fd 2...
	I0130 19:38:21.317677   19292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:21.317870   19292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:38:21.318399   19292 out.go:303] Setting JSON to false
	I0130 19:38:21.319216   19292 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1247,"bootTime":1706642255,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:38:21.319306   19292 start.go:138] virtualization: kvm guest
	I0130 19:38:21.321543   19292 out.go:177] * [functional-741304] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:38:21.323217   19292 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 19:38:21.323251   19292 notify.go:220] Checking for updates...
	I0130 19:38:21.324399   19292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:38:21.325559   19292 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 19:38:21.326701   19292 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:38:21.327816   19292 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 19:38:21.329023   19292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 19:38:21.330905   19292 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:38:21.331526   19292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:38:21.331578   19292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:38:21.345979   19292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45065
	I0130 19:38:21.346310   19292 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:38:21.346821   19292 main.go:141] libmachine: Using API Version  1
	I0130 19:38:21.346847   19292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:38:21.347184   19292 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:38:21.347356   19292 main.go:141] libmachine: (functional-741304) Calling .DriverName
	I0130 19:38:21.347584   19292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:38:21.347845   19292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:38:21.347884   19292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:38:21.361654   19292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0130 19:38:21.362079   19292 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:38:21.362520   19292 main.go:141] libmachine: Using API Version  1
	I0130 19:38:21.362545   19292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:38:21.362981   19292 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:38:21.363165   19292 main.go:141] libmachine: (functional-741304) Calling .DriverName
	I0130 19:38:21.395816   19292 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 19:38:21.397097   19292 start.go:298] selected driver: kvm2
	I0130 19:38:21.397114   19292 start.go:902] validating driver "kvm2" against &{Name:functional-741304 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-741304 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.230 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:38:21.397214   19292 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 19:38:21.399166   19292 out.go:177] 
	W0130 19:38:21.400341   19292 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0130 19:38:21.401542   19292 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741304 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741304 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-741304 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (163.39795ms)

-- stdout --
	* [functional-741304] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0130 19:38:21.632135   19363 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:38:21.632265   19363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:21.632274   19363 out.go:309] Setting ErrFile to fd 2...
	I0130 19:38:21.632279   19363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:38:21.632616   19363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:38:21.633157   19363 out.go:303] Setting JSON to false
	I0130 19:38:21.634041   19363 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1247,"bootTime":1706642255,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:38:21.634101   19363 start.go:138] virtualization: kvm guest
	I0130 19:38:21.636440   19363 out.go:177] * [functional-741304] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0130 19:38:21.637866   19363 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 19:38:21.637816   19363 notify.go:220] Checking for updates...
	I0130 19:38:21.639147   19363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:38:21.640825   19363 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 19:38:21.642013   19363 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 19:38:21.643291   19363 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 19:38:21.644642   19363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 19:38:21.646408   19363 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:38:21.647186   19363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:38:21.647230   19363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:38:21.669116   19363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0130 19:38:21.669546   19363 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:38:21.670094   19363 main.go:141] libmachine: Using API Version  1
	I0130 19:38:21.670120   19363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:38:21.670468   19363 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:38:21.670617   19363 main.go:141] libmachine: (functional-741304) Calling .DriverName
	I0130 19:38:21.670830   19363 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:38:21.671139   19363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:38:21.671217   19363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:38:21.685029   19363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0130 19:38:21.685371   19363 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:38:21.685845   19363 main.go:141] libmachine: Using API Version  1
	I0130 19:38:21.685864   19363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:38:21.686268   19363 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:38:21.686410   19363 main.go:141] libmachine: (functional-741304) Calling .DriverName
	I0130 19:38:21.718213   19363 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0130 19:38:21.719597   19363 start.go:298] selected driver: kvm2
	I0130 19:38:21.719612   19363 start.go:902] validating driver "kvm2" against &{Name:functional-741304 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-741304 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.230 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:38:21.719736   19363 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 19:38:21.721855   19363 out.go:177] 
	W0130 19:38:21.723092   19363 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0130 19:38:21.724361   19363 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

TestFunctional/parallel/ServiceCmdConnect (12.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-741304 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-741304 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-h4bf5" [403ae884-cf63-4c9e-97c3-3eaa2864f0db] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-h4bf5" [403ae884-cf63-4c9e-97c3-3eaa2864f0db] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.005589604s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.50.230:30941
functional_test.go:1671: http://192.168.50.230:30941: success! body:

Hostname: hello-node-connect-55497b8b78-h4bf5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.230:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.230:30941
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.62s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (55.9s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b5cc539f-5052-4717-8dd7-d3ebdbd7ef60] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.007251519s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-741304 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-741304 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-741304 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-741304 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-741304 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3ab3c644-eb1f-4f33-9167-3634ee23c991] Pending
helpers_test.go:344: "sp-pod" [3ab3c644-eb1f-4f33-9167-3634ee23c991] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3ab3c644-eb1f-4f33-9167-3634ee23c991] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.271980864s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-741304 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-741304 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-741304 delete -f testdata/storage-provisioner/pod.yaml: (4.28035066s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-741304 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7ce30d95-4e85-4dca-a219-a581c694be55] Pending
E0130 19:38:39.711338   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:38:39.717112   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:38:39.727319   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:38:39.747606   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:38:39.787908   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:38:39.868234   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [7ce30d95-4e85-4dca-a219-a581c694be55] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0130 19:38:40.029275   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:38:40.349599   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:38:40.989778   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:38:42.270729   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [7ce30d95-4e85-4dca-a219-a581c694be55] Running
E0130 19:39:00.193271   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.004201586s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-741304 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.90s)

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (1.41s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh -n functional-741304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 cp functional-741304:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3780297957/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh -n functional-741304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh -n functional-741304 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.41s)

TestFunctional/parallel/MySQL (29.06s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-741304 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-qvk8d" [e3751e03-2011-4100-9b66-3580b4a432e2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-qvk8d" [e3751e03-2011-4100-9b66-3580b4a432e2] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.00840967s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-741304 exec mysql-859648c796-qvk8d -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-741304 exec mysql-859648c796-qvk8d -- mysql -ppassword -e "show databases;": exit status 1 (299.693001ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
E0130 19:38:49.952754   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-741304 exec mysql-859648c796-qvk8d -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-741304 exec mysql-859648c796-qvk8d -- mysql -ppassword -e "show databases;": exit status 1 (224.388907ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-741304 exec mysql-859648c796-qvk8d -- mysql -ppassword -e "show databases;"
2024/01/30 19:38:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (29.06s)

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11667/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "sudo cat /etc/test/nested/copy/11667/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11667.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "sudo cat /etc/ssl/certs/11667.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11667.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "sudo cat /usr/share/ca-certificates/11667.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/116672.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "sudo cat /etc/ssl/certs/116672.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/116672.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "sudo cat /usr/share/ca-certificates/116672.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-741304 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 ssh "sudo systemctl is-active docker": exit status 1 (248.214439ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 ssh "sudo systemctl is-active containerd": exit status 1 (237.37178ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

TestFunctional/parallel/License (0.62s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-741304 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-741304 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-bb6gc" [f8185664-6ade-438a-9452-bbdc77ad60c0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-bb6gc" [f8185664-6ade-438a-9452-bbdc77ad60c0] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.004239684s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "265.503847ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "61.180024ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "307.872636ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "61.142281ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (9.48s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-741304 /tmp/TestFunctionalparallelMountCmdany-port3145212625/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1706643491164240135" to /tmp/TestFunctionalparallelMountCmdany-port3145212625/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1706643491164240135" to /tmp/TestFunctionalparallelMountCmdany-port3145212625/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1706643491164240135" to /tmp/TestFunctionalparallelMountCmdany-port3145212625/001/test-1706643491164240135
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.583486ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 30 19:38 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 30 19:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 30 19:38 test-1706643491164240135
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh cat /mount-9p/test-1706643491164240135
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-741304 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bb5959cb-7ed1-4a29-baa8-16b65de6b302] Pending
helpers_test.go:344: "busybox-mount" [bb5959cb-7ed1-4a29-baa8-16b65de6b302] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bb5959cb-7ed1-4a29-baa8-16b65de6b302] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bb5959cb-7ed1-4a29-baa8-16b65de6b302] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005066403s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-741304 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741304 /tmp/TestFunctionalparallelMountCmdany-port3145212625/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.48s)

TestFunctional/parallel/MountCmd/specific-port (2.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-741304 /tmp/TestFunctionalparallelMountCmdspecific-port93912352/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.474913ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741304 /tmp/TestFunctionalparallelMountCmdspecific-port93912352/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 ssh "sudo umount -f /mount-9p": exit status 1 (255.013659ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-741304 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741304 /tmp/TestFunctionalparallelMountCmdspecific-port93912352/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

TestFunctional/parallel/ServiceCmd/List (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 service list -o json
functional_test.go:1490: Took "390.907383ms" to run "out/minikube-linux-amd64 -p functional-741304 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.50.230:32483
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-741304 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-741304 image ls --format short --alsologtostderr:
I0130 19:38:30.489993   20173 out.go:296] Setting OutFile to fd 1 ...
I0130 19:38:30.490125   20173 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:38:30.490134   20173 out.go:309] Setting ErrFile to fd 2...
I0130 19:38:30.490139   20173 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:38:30.490356   20173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
I0130 19:38:30.490899   20173 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 19:38:30.490993   20173 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 19:38:30.491383   20173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 19:38:30.491421   20173 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:38:30.505295   20173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
I0130 19:38:30.505690   20173 main.go:141] libmachine: () Calling .GetVersion
I0130 19:38:30.506196   20173 main.go:141] libmachine: Using API Version  1
I0130 19:38:30.506217   20173 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:38:30.506572   20173 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:38:30.506746   20173 main.go:141] libmachine: (functional-741304) Calling .GetState
I0130 19:38:30.508450   20173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 19:38:30.508485   20173 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:38:30.521795   20173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
I0130 19:38:30.522131   20173 main.go:141] libmachine: () Calling .GetVersion
I0130 19:38:30.522551   20173 main.go:141] libmachine: Using API Version  1
I0130 19:38:30.522594   20173 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:38:30.522849   20173 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:38:30.522984   20173 main.go:141] libmachine: (functional-741304) Calling .DriverName
I0130 19:38:30.523145   20173 ssh_runner.go:195] Run: systemctl --version
I0130 19:38:30.523168   20173 main.go:141] libmachine: (functional-741304) Calling .GetSSHHostname
I0130 19:38:30.525506   20173 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
I0130 19:38:30.525883   20173 main.go:141] libmachine: (functional-741304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:f0", ip: ""} in network mk-functional-741304: {Iface:virbr1 ExpiryTime:2024-01-30 20:35:42 +0000 UTC Type:0 Mac:52:54:00:44:f0:f0 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:functional-741304 Clientid:01:52:54:00:44:f0:f0}
I0130 19:38:30.525924   20173 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined IP address 192.168.50.230 and MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
I0130 19:38:30.525991   20173 main.go:141] libmachine: (functional-741304) Calling .GetSSHPort
I0130 19:38:30.526149   20173 main.go:141] libmachine: (functional-741304) Calling .GetSSHKeyPath
I0130 19:38:30.526275   20173 main.go:141] libmachine: (functional-741304) Calling .GetSSHUsername
I0130 19:38:30.526378   20173 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/functional-741304/id_rsa Username:docker}
I0130 19:38:30.618426   20173 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 19:38:30.663877   20173 main.go:141] libmachine: Making call to close driver server
I0130 19:38:30.663892   20173 main.go:141] libmachine: (functional-741304) Calling .Close
I0130 19:38:30.664161   20173 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:38:30.664175   20173 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:38:30.664187   20173 main.go:141] libmachine: Making call to close driver server
I0130 19:38:30.664213   20173 main.go:141] libmachine: (functional-741304) Calling .Close
I0130 19:38:30.664436   20173 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:38:30.664453   20173 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-741304 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | a8758716bb6aa | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/my-image                      | functional-741304  | 0963eaa3c6610 | 1.47MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-741304 image ls --format table --alsologtostderr:
I0130 19:38:44.398365   20415 out.go:296] Setting OutFile to fd 1 ...
I0130 19:38:44.398521   20415 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:38:44.398532   20415 out.go:309] Setting ErrFile to fd 2...
I0130 19:38:44.398539   20415 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:38:44.398815   20415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
I0130 19:38:44.399679   20415 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 19:38:44.399791   20415 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 19:38:44.400296   20415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 19:38:44.400352   20415 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:38:44.415285   20415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45363
I0130 19:38:44.415763   20415 main.go:141] libmachine: () Calling .GetVersion
I0130 19:38:44.416387   20415 main.go:141] libmachine: Using API Version  1
I0130 19:38:44.416414   20415 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:38:44.416741   20415 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:38:44.416939   20415 main.go:141] libmachine: (functional-741304) Calling .GetState
I0130 19:38:44.418920   20415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 19:38:44.418968   20415 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:38:44.435569   20415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
I0130 19:38:44.435945   20415 main.go:141] libmachine: () Calling .GetVersion
I0130 19:38:44.436494   20415 main.go:141] libmachine: Using API Version  1
I0130 19:38:44.436518   20415 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:38:44.436861   20415 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:38:44.437097   20415 main.go:141] libmachine: (functional-741304) Calling .DriverName
I0130 19:38:44.437290   20415 ssh_runner.go:195] Run: systemctl --version
I0130 19:38:44.437324   20415 main.go:141] libmachine: (functional-741304) Calling .GetSSHHostname
I0130 19:38:44.440631   20415 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
I0130 19:38:44.441061   20415 main.go:141] libmachine: (functional-741304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:f0", ip: ""} in network mk-functional-741304: {Iface:virbr1 ExpiryTime:2024-01-30 20:35:42 +0000 UTC Type:0 Mac:52:54:00:44:f0:f0 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:functional-741304 Clientid:01:52:54:00:44:f0:f0}
I0130 19:38:44.441125   20415 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined IP address 192.168.50.230 and MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
I0130 19:38:44.441211   20415 main.go:141] libmachine: (functional-741304) Calling .GetSSHPort
I0130 19:38:44.441373   20415 main.go:141] libmachine: (functional-741304) Calling .GetSSHKeyPath
I0130 19:38:44.441850   20415 main.go:141] libmachine: (functional-741304) Calling .GetSSHUsername
I0130 19:38:44.442047   20415 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/functional-741304/id_rsa Username:docker}
I0130 19:38:44.565464   20415 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 19:38:44.619624   20415 main.go:141] libmachine: Making call to close driver server
I0130 19:38:44.619641   20415 main.go:141] libmachine: (functional-741304) Calling .Close
I0130 19:38:44.619919   20415 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:38:44.619941   20415 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:38:44.619954   20415 main.go:141] libmachine: Making call to close driver server
I0130 19:38:44.619963   20415 main.go:141] libmachine: (functional-741304) Calling .Close
I0130 19:38:44.620227   20415 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:38:44.620263   20415 main.go:141] libmachine: Making call to close connection to plugin binary
E0130 19:38:44.831656   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-741304 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d
8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"dfd303e5ad6c513652ecb6214e791f2e1add376d3dc18b54e01cb24bc40fe9b9","repoDigests":["docker.io/library/5280fd9c925ee43ce0d7b56f9e1b1cdae4c875a707a7ce95be5aeaafbfeda8e8-tmp@sha256:8118448e2192959fa160f505843abf8fec912a3717a12ee932f9b0528bee4ecb"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38
f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"0963eaa3c661077a482734518f52094db91576f7bd86af6f9eb9b3dfa295230c","repoDigests":["localhost/my-image@sha256:fe2289383d86eaa558357f67f225a1e76508212ade5715d553faec8f10b53fc6"],"repoTags":["localhost/my-image:functional-741304"],"size":"1468600"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd
45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c","docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"7f
e0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c8
6781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.i
o/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-741304 image ls --format json --alsologtostderr:
I0130 19:38:44.103756   20392 out.go:296] Setting OutFile to fd 1 ...
I0130 19:38:44.103861   20392 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:38:44.103869   20392 out.go:309] Setting ErrFile to fd 2...
I0130 19:38:44.103874   20392 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:38:44.104067   20392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
I0130 19:38:44.106148   20392 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 19:38:44.106274   20392 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 19:38:44.106667   20392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 19:38:44.106710   20392 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:38:44.120404   20392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
I0130 19:38:44.120833   20392 main.go:141] libmachine: () Calling .GetVersion
I0130 19:38:44.121388   20392 main.go:141] libmachine: Using API Version  1
I0130 19:38:44.121420   20392 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:38:44.121734   20392 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:38:44.121952   20392 main.go:141] libmachine: (functional-741304) Calling .GetState
I0130 19:38:44.123836   20392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 19:38:44.123881   20392 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:38:44.137641   20392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
I0130 19:38:44.138088   20392 main.go:141] libmachine: () Calling .GetVersion
I0130 19:38:44.138525   20392 main.go:141] libmachine: Using API Version  1
I0130 19:38:44.138570   20392 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:38:44.138907   20392 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:38:44.139059   20392 main.go:141] libmachine: (functional-741304) Calling .DriverName
I0130 19:38:44.139242   20392 ssh_runner.go:195] Run: systemctl --version
I0130 19:38:44.139294   20392 main.go:141] libmachine: (functional-741304) Calling .GetSSHHostname
I0130 19:38:44.141884   20392 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
I0130 19:38:44.142292   20392 main.go:141] libmachine: (functional-741304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:f0", ip: ""} in network mk-functional-741304: {Iface:virbr1 ExpiryTime:2024-01-30 20:35:42 +0000 UTC Type:0 Mac:52:54:00:44:f0:f0 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:functional-741304 Clientid:01:52:54:00:44:f0:f0}
I0130 19:38:44.142315   20392 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined IP address 192.168.50.230 and MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
I0130 19:38:44.142485   20392 main.go:141] libmachine: (functional-741304) Calling .GetSSHPort
I0130 19:38:44.142635   20392 main.go:141] libmachine: (functional-741304) Calling .GetSSHKeyPath
I0130 19:38:44.142807   20392 main.go:141] libmachine: (functional-741304) Calling .GetSSHUsername
I0130 19:38:44.142931   20392 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/functional-741304/id_rsa Username:docker}
I0130 19:38:44.265183   20392 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 19:38:44.320692   20392 main.go:141] libmachine: Making call to close driver server
I0130 19:38:44.320711   20392 main.go:141] libmachine: (functional-741304) Calling .Close
I0130 19:38:44.320992   20392 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:38:44.321020   20392 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:38:44.321050   20392 main.go:141] libmachine: Making call to close driver server
I0130 19:38:44.321047   20392 main.go:141] libmachine: (functional-741304) DBG | Closing plugin on server side
I0130 19:38:44.321067   20392 main.go:141] libmachine: (functional-741304) Calling .Close
I0130 19:38:44.321310   20392 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:38:44.321354   20392 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
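Note: the JSON listing above is an array of image records (id, repoDigests, repoTags, and size emitted as a string). A small Go sketch that shells out to the same command and decodes it; the struct below is an illustration built from the fields visible in this output, not a type exported by minikube, and the binary path and profile name are the ones used in this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, emitted as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-741304",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s %s\n", img.RepoTags[0], img.Size)
		}
	}
}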

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-741304 image ls --format yaml --alsologtostderr:
- id: a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-741304 image ls --format yaml --alsologtostderr:
I0130 19:38:30.722408   20197 out.go:296] Setting OutFile to fd 1 ...
I0130 19:38:30.722651   20197 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:38:30.722661   20197 out.go:309] Setting ErrFile to fd 2...
I0130 19:38:30.722665   20197 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:38:30.722887   20197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
I0130 19:38:30.723497   20197 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 19:38:30.723613   20197 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 19:38:30.723988   20197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 19:38:30.724040   20197 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:38:30.738282   20197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
I0130 19:38:30.738686   20197 main.go:141] libmachine: () Calling .GetVersion
I0130 19:38:30.739302   20197 main.go:141] libmachine: Using API Version  1
I0130 19:38:30.739326   20197 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:38:30.739697   20197 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:38:30.739861   20197 main.go:141] libmachine: (functional-741304) Calling .GetState
I0130 19:38:30.741771   20197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 19:38:30.741812   20197 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:38:30.755604   20197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
I0130 19:38:30.755966   20197 main.go:141] libmachine: () Calling .GetVersion
I0130 19:38:30.756372   20197 main.go:141] libmachine: Using API Version  1
I0130 19:38:30.756394   20197 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:38:30.756711   20197 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:38:30.756895   20197 main.go:141] libmachine: (functional-741304) Calling .DriverName
I0130 19:38:30.757081   20197 ssh_runner.go:195] Run: systemctl --version
I0130 19:38:30.757103   20197 main.go:141] libmachine: (functional-741304) Calling .GetSSHHostname
I0130 19:38:30.760080   20197 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
I0130 19:38:30.760511   20197 main.go:141] libmachine: (functional-741304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:f0", ip: ""} in network mk-functional-741304: {Iface:virbr1 ExpiryTime:2024-01-30 20:35:42 +0000 UTC Type:0 Mac:52:54:00:44:f0:f0 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:functional-741304 Clientid:01:52:54:00:44:f0:f0}
I0130 19:38:30.760545   20197 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined IP address 192.168.50.230 and MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
I0130 19:38:30.760674   20197 main.go:141] libmachine: (functional-741304) Calling .GetSSHPort
I0130 19:38:30.760820   20197 main.go:141] libmachine: (functional-741304) Calling .GetSSHKeyPath
I0130 19:38:30.760979   20197 main.go:141] libmachine: (functional-741304) Calling .GetSSHUsername
I0130 19:38:30.761124   20197 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/functional-741304/id_rsa Username:docker}
I0130 19:38:30.849361   20197 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 19:38:30.890655   20197 main.go:141] libmachine: Making call to close driver server
I0130 19:38:30.890667   20197 main.go:141] libmachine: (functional-741304) Calling .Close
I0130 19:38:30.890916   20197 main.go:141] libmachine: (functional-741304) DBG | Closing plugin on server side
I0130 19:38:30.890942   20197 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:38:30.890960   20197 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:38:30.890977   20197 main.go:141] libmachine: Making call to close driver server
I0130 19:38:30.890990   20197 main.go:141] libmachine: (functional-741304) Calling .Close
I0130 19:38:30.891201   20197 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:38:30.891213   20197 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:38:30.891227   20197 main.go:141] libmachine: (functional-741304) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (13.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 ssh pgrep buildkitd: exit status 1 (211.608082ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image build -t localhost/my-image:functional-741304 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-741304 image build -t localhost/my-image:functional-741304 testdata/build --alsologtostderr: (12.66380142s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-741304 image build -t localhost/my-image:functional-741304 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> dfd303e5ad6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-741304
--> 0963eaa3c66
Successfully tagged localhost/my-image:functional-741304
0963eaa3c661077a482734518f52094db91576f7bd86af6f9eb9b3dfa295230c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-741304 image build -t localhost/my-image:functional-741304 testdata/build --alsologtostderr:
I0130 19:38:31.165107   20251 out.go:296] Setting OutFile to fd 1 ...
I0130 19:38:31.165374   20251 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:38:31.165383   20251 out.go:309] Setting ErrFile to fd 2...
I0130 19:38:31.165388   20251 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:38:31.165578   20251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
I0130 19:38:31.166114   20251 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 19:38:31.166590   20251 config.go:182] Loaded profile config "functional-741304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 19:38:31.166972   20251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 19:38:31.167010   20251 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:38:31.180886   20251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44645
I0130 19:38:31.181318   20251 main.go:141] libmachine: () Calling .GetVersion
I0130 19:38:31.181857   20251 main.go:141] libmachine: Using API Version  1
I0130 19:38:31.181876   20251 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:38:31.182241   20251 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:38:31.182422   20251 main.go:141] libmachine: (functional-741304) Calling .GetState
I0130 19:38:31.184162   20251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 19:38:31.184203   20251 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:38:31.198119   20251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44187
I0130 19:38:31.198490   20251 main.go:141] libmachine: () Calling .GetVersion
I0130 19:38:31.198846   20251 main.go:141] libmachine: Using API Version  1
I0130 19:38:31.198869   20251 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:38:31.199125   20251 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:38:31.199345   20251 main.go:141] libmachine: (functional-741304) Calling .DriverName
I0130 19:38:31.199543   20251 ssh_runner.go:195] Run: systemctl --version
I0130 19:38:31.199567   20251 main.go:141] libmachine: (functional-741304) Calling .GetSSHHostname
I0130 19:38:31.202683   20251 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
I0130 19:38:31.203023   20251 main.go:141] libmachine: (functional-741304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:f0", ip: ""} in network mk-functional-741304: {Iface:virbr1 ExpiryTime:2024-01-30 20:35:42 +0000 UTC Type:0 Mac:52:54:00:44:f0:f0 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:functional-741304 Clientid:01:52:54:00:44:f0:f0}
I0130 19:38:31.203060   20251 main.go:141] libmachine: (functional-741304) DBG | domain functional-741304 has defined IP address 192.168.50.230 and MAC address 52:54:00:44:f0:f0 in network mk-functional-741304
I0130 19:38:31.203197   20251 main.go:141] libmachine: (functional-741304) Calling .GetSSHPort
I0130 19:38:31.203383   20251 main.go:141] libmachine: (functional-741304) Calling .GetSSHKeyPath
I0130 19:38:31.203554   20251 main.go:141] libmachine: (functional-741304) Calling .GetSSHUsername
I0130 19:38:31.203681   20251 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/functional-741304/id_rsa Username:docker}
I0130 19:38:31.315897   20251 build_images.go:151] Building image from path: /tmp/build.1434340875.tar
I0130 19:38:31.315977   20251 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0130 19:38:31.329223   20251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1434340875.tar
I0130 19:38:31.337248   20251 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1434340875.tar: stat -c "%s %y" /var/lib/minikube/build/build.1434340875.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1434340875.tar': No such file or directory
I0130 19:38:31.337284   20251 ssh_runner.go:362] scp /tmp/build.1434340875.tar --> /var/lib/minikube/build/build.1434340875.tar (3072 bytes)
I0130 19:38:31.365225   20251 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1434340875
I0130 19:38:31.374918   20251 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1434340875 -xf /var/lib/minikube/build/build.1434340875.tar
I0130 19:38:31.384046   20251 crio.go:297] Building image: /var/lib/minikube/build/build.1434340875
I0130 19:38:31.384100   20251 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-741304 /var/lib/minikube/build/build.1434340875 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0130 19:38:43.724909   20251 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-741304 /var/lib/minikube/build/build.1434340875 --cgroup-manager=cgroupfs: (12.340788591s)
I0130 19:38:43.724973   20251 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1434340875
I0130 19:38:43.746144   20251 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1434340875.tar
I0130 19:38:43.767568   20251 build_images.go:207] Built localhost/my-image:functional-741304 from /tmp/build.1434340875.tar
I0130 19:38:43.767594   20251 build_images.go:123] succeeded building to: functional-741304
I0130 19:38:43.767601   20251 build_images.go:124] failed building to: 
I0130 19:38:43.767655   20251 main.go:141] libmachine: Making call to close driver server
I0130 19:38:43.767669   20251 main.go:141] libmachine: (functional-741304) Calling .Close
I0130 19:38:43.767953   20251 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:38:43.767974   20251 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:38:43.767985   20251 main.go:141] libmachine: Making call to close driver server
I0130 19:38:43.768006   20251 main.go:141] libmachine: (functional-741304) DBG | Closing plugin on server side
I0130 19:38:43.768085   20251 main.go:141] libmachine: (functional-741304) Calling .Close
I0130 19:38:43.768400   20251 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:38:43.768416   20251 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (13.15s)
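Note: the build above runs podman inside the VM against testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) and then re-lists images to confirm the new tag. A rough Go sketch of the same two-step check, reusing the binary path and profile name from this run (both are assumptions carried over from the log, not general defaults):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as in the log: build testdata/build into a localhost-tagged image.
	build := exec.Command("out/minikube-linux-amd64", "-p", "functional-741304",
		"image", "build", "-t", "localhost/my-image:functional-741304", "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}
	// Mirror the follow-up check (functional_test.go:447): the tag should appear in `image ls`.
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-741304",
		"image", "ls").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(ls), "localhost/my-image:functional-741304") {
		fmt.Println("image built and listed")
	}
}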

TestFunctional/parallel/ImageCommands/Setup (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.014167147s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-741304
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.04s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.50.230:32483
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-741304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup173082199/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-741304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup173082199/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-741304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup173082199/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741304 ssh "findmnt -T" /mount1: exit status 1 (329.622525ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-741304 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup173082199/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup173082199/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup173082199/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image rm gcr.io/google-containers/addon-resizer:functional-741304 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-741304 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-741304
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-741304
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-741304
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (128.32s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-223875 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0130 19:39:20.674279   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 19:40:01.635038   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-223875 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m8.315112777s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (128.32s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.94s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-223875 addons enable ingress --alsologtostderr -v=5
E0130 19:41:23.556048   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-223875 addons enable ingress --alsologtostderr -v=5: (16.938170703s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.94s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-223875 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

TestJSONOutput/start/Command (98.02s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-850708 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0130 19:44:29.695177   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:45:51.616373   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-850708 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.021156797s)
--- PASS: TestJSONOutput/start/Command (98.02s)
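Note: with --output=json, minikube emits line-delimited JSON events on stdout; the parallel subtests further down only assert properties of the step counters carried in those events. A generic Go sketch for consuming such a stream (piped in on stdin here; no event field names are assumed):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	// Allow for long event lines.
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise in the stream
		}
		fmt.Println(ev) // each line decoded as one event object
	}
}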

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
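
Judging by their names, the two parallel subtests above consume the JSON event stream produced by the --output=json start and check that the currentstep values it reports are distinct and strictly increasing. A minimal sketch of that kind of check, assuming each stdout line is a standalone JSON event whose data.currentstep field (when present) is a numeric string, as in the events captured under TestErrorJSONOutput further down:

// steps_check.go: verify that "currentstep" values in a minikube --output=json
// event stream are distinct and strictly increasing. Sketch only; field names
// follow the sample events shown in this report.
package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
    "strconv"
)

type event struct {
    Data map[string]string `json:"data"`
}

func main() {
    last := -1
    sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | steps_check
    for sc.Scan() {
        var ev event
        if json.Unmarshal(sc.Bytes(), &ev) != nil {
            continue // skip anything that is not a JSON event
        }
        s, ok := ev.Data["currentstep"]
        if !ok {
            continue // info/error events carry no step number
        }
        n, err := strconv.Atoi(s)
        if err != nil || n <= last {
            fmt.Fprintf(os.Stderr, "step %q is not greater than %d\n", s, last)
            os.Exit(1)
        }
        last = n
    }
}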

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-850708 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-850708 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-850708 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-850708 --output=json --user=testUser: (7.097624728s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-315831 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-315831 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.567409ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"34672bbc-d727-40ca-81a7-5672ac10369a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-315831] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd8c9580-2e20-49b2-b480-fe7b135f13d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18007"}}
	{"specversion":"1.0","id":"895bd4ca-25d4-4925-ba36-dd82b5dfcbd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8e938781-9aef-4087-9d2a-d86496549299","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig"}}
	{"specversion":"1.0","id":"af899b68-c709-44b7-a5ec-5368451e9c8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube"}}
	{"specversion":"1.0","id":"75a778ed-8c0d-4d19-b493-4df912a07a09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4c206c21-006c-40dc-a1bf-36cd3b6abd51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"be5bbe1b-445a-4391-a4c6-0d9908b778e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-315831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-315831
--- PASS: TestErrorJSONOutput (0.21s)
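
The captured stdout above shows the stream format directly: one CloudEvents-style JSON object per line, with the failure reported as an event whose type ends in .error and whose data carries name (DRV_UNSUPPORTED_OS), exitcode (56) and a human-readable message. A small consumer that surfaces such an error, written against exactly those field names (a sketch, not minikube's own client code):

// error_check.go: print the first error event found in a minikube
// --output=json stream and exit non-zero.
package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
    "strings"
)

type cloudEvent struct {
    Type string            `json:"type"`
    Data map[string]string `json:"data"`
}

func main() {
    sc := bufio.NewScanner(os.Stdin)
    for sc.Scan() {
        var ev cloudEvent
        if json.Unmarshal(sc.Bytes(), &ev) != nil {
            continue
        }
        if strings.HasSuffix(ev.Type, ".error") {
            fmt.Printf("minikube failed: %s (exit code %s): %s\n",
                ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            os.Exit(1)
        }
    }
}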

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (98.48s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-262460 --driver=kvm2  --container-runtime=crio
E0130 19:46:31.182298   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:31.187564   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:31.197819   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:31.218093   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:31.258341   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:31.338687   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:31.499081   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:31.819641   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:32.460561   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:33.741131   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:36.302927   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:41.423657   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:46:51.664501   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-262460 --driver=kvm2  --container-runtime=crio: (46.664156131s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-265530 --driver=kvm2  --container-runtime=crio
E0130 19:47:12.145467   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-265530 --driver=kvm2  --container-runtime=crio: (48.973732322s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-262460
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-265530
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-265530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-265530
helpers_test.go:175: Cleaning up "first-262460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-262460
--- PASS: TestMinikubeProfile (98.48s)
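
The profile checks above switch the active profile and then read `minikube profile list -ojson`. A sketch of decoding that output; the top-level valid/invalid keys and the Name field are assumptions about the JSON shape, not a documented schema:

// profiles.go: list the profiles that "profile list -ojson" reports as valid.
package main

import (
    "encoding/json"
    "fmt"
    "os"
    "os/exec"
)

// profileList is an assumed shape for the -ojson output.
type profileList struct {
    Valid []struct {
        Name string `json:"Name"`
    } `json:"valid"`
    Invalid []struct {
        Name string `json:"Name"`
    } `json:"invalid"`
}

func main() {
    out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
    if err != nil {
        fmt.Fprintf(os.Stderr, "profile list failed: %v\n", err)
        os.Exit(1)
    }
    var pl profileList
    if err := json.Unmarshal(out, &pl); err != nil {
        fmt.Fprintf(os.Stderr, "unexpected output: %v\n", err)
        os.Exit(1)
    }
    for _, p := range pl.Valid {
        fmt.Println("valid profile:", p.Name)
    }
}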

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.37s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-452097 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0130 19:47:53.105835   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 19:48:07.774062   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-452097 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.370355663s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.37s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-452097 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-452097 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
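
The two verification steps above amount to confirming that the host directory is visible inside the guest over a 9p mount. The same check done directly from Go, using the profile name and binary path from this run:

// verify_mount.go: fail unless the guest's mount table contains a 9p entry.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    out, err := exec.Command("out/minikube-linux-amd64",
        "-p", "mount-start-1-452097", "ssh", "--", "mount").CombinedOutput()
    if err != nil {
        fmt.Fprintf(os.Stderr, "minikube ssh failed: %v\n%s", err, out)
        os.Exit(1)
    }
    if !strings.Contains(string(out), "9p") {
        fmt.Fprintln(os.Stderr, "no 9p mount found in guest mount table")
        os.Exit(1)
    }
    fmt.Println("9p host mount present (see /minikube-host above)")
}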

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (26.71s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-465514 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0130 19:48:35.457919   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 19:48:39.711289   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-465514 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.712100372s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-465514 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-465514 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-452097 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.65s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-465514 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-465514 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-465514
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-465514: (1.208228334s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.14s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-465514
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-465514: (21.143376269s)
--- PASS: TestMountStart/serial/RestartStopped (22.14s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-465514 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-465514 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (111.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-572652 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0130 19:49:15.026232   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-572652 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.128156051s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.55s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-572652 -- rollout status deployment/busybox: (4.690368563s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- exec busybox-5b5d89c9d6-f2vmn -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- exec busybox-5b5d89c9d6-sbgq8 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- exec busybox-5b5d89c9d6-f2vmn -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- exec busybox-5b5d89c9d6-sbgq8 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- exec busybox-5b5d89c9d6-f2vmn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- exec busybox-5b5d89c9d6-sbgq8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.45s)
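
The deployment check above fans the same three lookups (an external name, the short service name, and the cluster FQDN) across every busybox pod. A sketch of that loop via kubectl exec, reusing the pod names generated in this run:

// dns_fanout.go: run nslookup from each test pod for each name and fail fast.
package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    pods := []string{"busybox-5b5d89c9d6-f2vmn", "busybox-5b5d89c9d6-sbgq8"}
    names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
    for _, pod := range pods {
        for _, name := range names {
            cmd := exec.Command("kubectl", "--context", "multinode-572652",
                "exec", pod, "--", "nslookup", name)
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "%s: lookup of %s failed: %v\n%s", pod, name, err, out)
                os.Exit(1)
            }
        }
    }
    fmt.Println("DNS resolution succeeded from all pods")
}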

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- exec busybox-5b5d89c9d6-f2vmn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- exec busybox-5b5d89c9d6-f2vmn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- exec busybox-5b5d89c9d6-sbgq8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-572652 -- exec busybox-5b5d89c9d6-sbgq8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (46.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-572652 -v 3 --alsologtostderr
E0130 19:51:31.182235   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-572652 -v 3 --alsologtostderr: (45.448274717s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 status --alsologtostderr
E0130 19:51:58.866875   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/AddNode (46.04s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-572652 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp testdata/cp-test.txt multinode-572652:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp multinode-572652:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile652618288/001/cp-test_multinode-572652.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp multinode-572652:/home/docker/cp-test.txt multinode-572652-m02:/home/docker/cp-test_multinode-572652_multinode-572652-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m02 "sudo cat /home/docker/cp-test_multinode-572652_multinode-572652-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp multinode-572652:/home/docker/cp-test.txt multinode-572652-m03:/home/docker/cp-test_multinode-572652_multinode-572652-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m03 "sudo cat /home/docker/cp-test_multinode-572652_multinode-572652-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp testdata/cp-test.txt multinode-572652-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp multinode-572652-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile652618288/001/cp-test_multinode-572652-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp multinode-572652-m02:/home/docker/cp-test.txt multinode-572652:/home/docker/cp-test_multinode-572652-m02_multinode-572652.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652 "sudo cat /home/docker/cp-test_multinode-572652-m02_multinode-572652.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp multinode-572652-m02:/home/docker/cp-test.txt multinode-572652-m03:/home/docker/cp-test_multinode-572652-m02_multinode-572652-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m03 "sudo cat /home/docker/cp-test_multinode-572652-m02_multinode-572652-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp testdata/cp-test.txt multinode-572652-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp multinode-572652-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile652618288/001/cp-test_multinode-572652-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp multinode-572652-m03:/home/docker/cp-test.txt multinode-572652:/home/docker/cp-test_multinode-572652-m03_multinode-572652.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652 "sudo cat /home/docker/cp-test_multinode-572652-m03_multinode-572652.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 cp multinode-572652-m03:/home/docker/cp-test.txt multinode-572652-m02:/home/docker/cp-test_multinode-572652-m03_multinode-572652-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 ssh -n multinode-572652-m02 "sudo cat /home/docker/cp-test_multinode-572652-m03_multinode-572652-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.53s)
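
The copy matrix above exercises `minikube cp` in every direction (testdata into a node, node to the local temp dir, node to node) and verifies each leg by reading the file back with `ssh -n <node> "sudo cat ..."`. One leg of that round trip, sketched in Go with the paths used above:

// cp_roundtrip.go: copy a file into a node and read it back to confirm it survived.
package main

import (
    "bytes"
    "fmt"
    "os"
    "os/exec"
)

func run(args ...string) []byte {
    out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
    if err != nil {
        fmt.Fprintf(os.Stderr, "%v failed: %v\n%s", args, err, out)
        os.Exit(1)
    }
    return out
}

func main() {
    want, err := os.ReadFile("testdata/cp-test.txt")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    run("-p", "multinode-572652", "cp", "testdata/cp-test.txt",
        "multinode-572652-m02:/home/docker/cp-test.txt")
    got := run("-p", "multinode-572652", "ssh", "-n", "multinode-572652-m02",
        "sudo cat /home/docker/cp-test.txt")
    if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
        fmt.Fprintln(os.Stderr, "copied content does not match the source file")
        os.Exit(1)
    }
    fmt.Println("cp round trip verified")
}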

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-572652 node stop m03: (1.424435626s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-572652 status: exit status 7 (441.880168ms)

                                                
                                                
-- stdout --
	multinode-572652
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-572652-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-572652-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-572652 status --alsologtostderr: exit status 7 (437.558542ms)

                                                
                                                
-- stdout --
	multinode-572652
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-572652-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-572652-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 19:52:09.074509   27397 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:52:09.074633   27397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:52:09.074642   27397 out.go:309] Setting ErrFile to fd 2...
	I0130 19:52:09.074647   27397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:52:09.074818   27397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 19:52:09.074988   27397 out.go:303] Setting JSON to false
	I0130 19:52:09.075009   27397 mustload.go:65] Loading cluster: multinode-572652
	I0130 19:52:09.075119   27397 notify.go:220] Checking for updates...
	I0130 19:52:09.075426   27397 config.go:182] Loaded profile config "multinode-572652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 19:52:09.075445   27397 status.go:255] checking status of multinode-572652 ...
	I0130 19:52:09.075905   27397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:52:09.075949   27397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:52:09.091492   27397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42169
	I0130 19:52:09.091885   27397 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:52:09.092557   27397 main.go:141] libmachine: Using API Version  1
	I0130 19:52:09.092583   27397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:52:09.092911   27397 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:52:09.093114   27397 main.go:141] libmachine: (multinode-572652) Calling .GetState
	I0130 19:52:09.094706   27397 status.go:330] multinode-572652 host status = "Running" (err=<nil>)
	I0130 19:52:09.094727   27397 host.go:66] Checking if "multinode-572652" exists ...
	I0130 19:52:09.095026   27397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:52:09.095075   27397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:52:09.108975   27397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45621
	I0130 19:52:09.109335   27397 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:52:09.109746   27397 main.go:141] libmachine: Using API Version  1
	I0130 19:52:09.109767   27397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:52:09.110066   27397 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:52:09.110228   27397 main.go:141] libmachine: (multinode-572652) Calling .GetIP
	I0130 19:52:09.113099   27397 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:52:09.113487   27397 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:49:29 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 19:52:09.113511   27397 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:52:09.113642   27397 host.go:66] Checking if "multinode-572652" exists ...
	I0130 19:52:09.113912   27397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:52:09.113954   27397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:52:09.127892   27397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
	I0130 19:52:09.128209   27397 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:52:09.128627   27397 main.go:141] libmachine: Using API Version  1
	I0130 19:52:09.128647   27397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:52:09.128936   27397 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:52:09.129112   27397 main.go:141] libmachine: (multinode-572652) Calling .DriverName
	I0130 19:52:09.129303   27397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0130 19:52:09.129325   27397 main.go:141] libmachine: (multinode-572652) Calling .GetSSHHostname
	I0130 19:52:09.131798   27397 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:52:09.132172   27397 main.go:141] libmachine: (multinode-572652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1f:80", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:49:29 +0000 UTC Type:0 Mac:52:54:00:8f:1f:80 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-572652 Clientid:01:52:54:00:8f:1f:80}
	I0130 19:52:09.132199   27397 main.go:141] libmachine: (multinode-572652) DBG | domain multinode-572652 has defined IP address 192.168.39.186 and MAC address 52:54:00:8f:1f:80 in network mk-multinode-572652
	I0130 19:52:09.132289   27397 main.go:141] libmachine: (multinode-572652) Calling .GetSSHPort
	I0130 19:52:09.132464   27397 main.go:141] libmachine: (multinode-572652) Calling .GetSSHKeyPath
	I0130 19:52:09.132615   27397 main.go:141] libmachine: (multinode-572652) Calling .GetSSHUsername
	I0130 19:52:09.132738   27397 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652/id_rsa Username:docker}
	I0130 19:52:09.219944   27397 ssh_runner.go:195] Run: systemctl --version
	I0130 19:52:09.225622   27397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 19:52:09.240089   27397 kubeconfig.go:92] found "multinode-572652" server: "https://192.168.39.186:8443"
	I0130 19:52:09.240113   27397 api_server.go:166] Checking apiserver status ...
	I0130 19:52:09.240152   27397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 19:52:09.253225   27397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1063/cgroup
	I0130 19:52:09.261924   27397 api_server.go:182] apiserver freezer: "6:freezer:/kubepods/burstable/podd6f18dcbbdea790709196864d2f77f8b/crio-afabd778813c77851eb20a610ddec83219d2dcb52cfb15f774a00da198668328"
	I0130 19:52:09.261981   27397 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd6f18dcbbdea790709196864d2f77f8b/crio-afabd778813c77851eb20a610ddec83219d2dcb52cfb15f774a00da198668328/freezer.state
	I0130 19:52:09.271005   27397 api_server.go:204] freezer state: "THAWED"
	I0130 19:52:09.271028   27397 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0130 19:52:09.275757   27397 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0130 19:52:09.275777   27397 status.go:421] multinode-572652 apiserver status = Running (err=<nil>)
	I0130 19:52:09.275792   27397 status.go:257] multinode-572652 status: &{Name:multinode-572652 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0130 19:52:09.275818   27397 status.go:255] checking status of multinode-572652-m02 ...
	I0130 19:52:09.276113   27397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:52:09.276156   27397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:52:09.290677   27397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0130 19:52:09.291084   27397 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:52:09.291546   27397 main.go:141] libmachine: Using API Version  1
	I0130 19:52:09.291570   27397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:52:09.291886   27397 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:52:09.292068   27397 main.go:141] libmachine: (multinode-572652-m02) Calling .GetState
	I0130 19:52:09.293668   27397 status.go:330] multinode-572652-m02 host status = "Running" (err=<nil>)
	I0130 19:52:09.293693   27397 host.go:66] Checking if "multinode-572652-m02" exists ...
	I0130 19:52:09.294054   27397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:52:09.294096   27397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:52:09.308040   27397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41093
	I0130 19:52:09.308403   27397 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:52:09.308835   27397 main.go:141] libmachine: Using API Version  1
	I0130 19:52:09.308865   27397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:52:09.309143   27397 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:52:09.309341   27397 main.go:141] libmachine: (multinode-572652-m02) Calling .GetIP
	I0130 19:52:09.312035   27397 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 19:52:09.312422   27397 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 19:52:09.312455   27397 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 19:52:09.312572   27397 host.go:66] Checking if "multinode-572652-m02" exists ...
	I0130 19:52:09.312857   27397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:52:09.312898   27397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:52:09.326594   27397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0130 19:52:09.327028   27397 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:52:09.327475   27397 main.go:141] libmachine: Using API Version  1
	I0130 19:52:09.327494   27397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:52:09.327855   27397 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:52:09.328084   27397 main.go:141] libmachine: (multinode-572652-m02) Calling .DriverName
	I0130 19:52:09.328294   27397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0130 19:52:09.328316   27397 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHHostname
	I0130 19:52:09.331132   27397 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 19:52:09.331581   27397 main.go:141] libmachine: (multinode-572652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:12:51", ip: ""} in network mk-multinode-572652: {Iface:virbr1 ExpiryTime:2024-01-30 20:50:37 +0000 UTC Type:0 Mac:52:54:00:64:12:51 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-572652-m02 Clientid:01:52:54:00:64:12:51}
	I0130 19:52:09.331622   27397 main.go:141] libmachine: (multinode-572652-m02) DBG | domain multinode-572652-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:64:12:51 in network mk-multinode-572652
	I0130 19:52:09.331712   27397 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHPort
	I0130 19:52:09.331880   27397 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHKeyPath
	I0130 19:52:09.332062   27397 main.go:141] libmachine: (multinode-572652-m02) Calling .GetSSHUsername
	I0130 19:52:09.332199   27397 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4458/.minikube/machines/multinode-572652-m02/id_rsa Username:docker}
	I0130 19:52:09.426167   27397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 19:52:09.438643   27397 status.go:257] multinode-572652-m02 status: &{Name:multinode-572652-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0130 19:52:09.438695   27397 status.go:255] checking status of multinode-572652-m03 ...
	I0130 19:52:09.439009   27397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 19:52:09.439043   27397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:52:09.453114   27397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45561
	I0130 19:52:09.453463   27397 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:52:09.453917   27397 main.go:141] libmachine: Using API Version  1
	I0130 19:52:09.453940   27397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:52:09.454224   27397 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:52:09.454423   27397 main.go:141] libmachine: (multinode-572652-m03) Calling .GetState
	I0130 19:52:09.455912   27397 status.go:330] multinode-572652-m03 host status = "Stopped" (err=<nil>)
	I0130 19:52:09.455927   27397 status.go:343] host is not running, skipping remaining checks
	I0130 19:52:09.455934   27397 status.go:257] multinode-572652-m03 status: &{Name:multinode-572652-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
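
Note how `minikube status` exits non-zero (7 in this run) once any node is stopped, while still printing the per-node breakdown on stdout. A sketch that tolerates the non-zero exit and extracts the stopped hosts from that output; the exact meaning of the exit code is taken only as an observation from this run:

// status_scan.go: report which nodes of a multinode profile have a stopped host.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    out, err := exec.Command("out/minikube-linux-amd64",
        "-p", "multinode-572652", "status").CombinedOutput()
    if err != nil {
        // Non-zero exit is expected while a node is down; keep going and
        // read the breakdown instead of treating it as a hard failure.
        fmt.Printf("status exited non-zero: %v\n", err)
    }
    node := ""
    for _, line := range strings.Split(string(out), "\n") {
        line = strings.TrimSpace(line)
        switch {
        case strings.HasPrefix(line, "multinode-"):
            node = line
        case line == "host: Stopped":
            fmt.Printf("node %s is stopped\n", node)
        }
    }
}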

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (31.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-572652 node start m03 --alsologtostderr: (30.972251045s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.62s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-572652 node delete m03: (1.217036463s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.78s)
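
The go-template handed to `kubectl get nodes` above prints the status of each node's Ready condition, one value per line, which is what gets inspected after the node is deleted. The same readiness count, sketched by shelling out to kubectl with that template:

// ready_count.go: count nodes whose Ready condition reports True.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    tpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
    out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tpl).Output()
    if err != nil {
        fmt.Fprintf(os.Stderr, "kubectl get nodes failed: %v\n", err)
        os.Exit(1)
    }
    ready := 0
    for _, line := range strings.Split(string(out), "\n") {
        if strings.TrimSpace(line) == "True" {
            ready++
        }
    }
    fmt.Printf("%d node(s) report Ready=True\n", ready)
}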

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (446.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-572652 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0130 20:08:07.772312   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 20:08:39.710949   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 20:11:31.181576   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 20:11:42.759033   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 20:13:07.773772   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 20:13:39.710680   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-572652 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.211900721s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-572652 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (446.76s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (48.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-572652
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-572652-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-572652-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.634748ms)

                                                
                                                
-- stdout --
	* [multinode-572652-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-572652-m02' is duplicated with machine name 'multinode-572652-m02' in profile 'multinode-572652'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-572652-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-572652-m03 --driver=kvm2  --container-runtime=crio: (47.172511776s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-572652
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-572652: exit status 80 (229.502647ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-572652
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-572652-m03 already exists in multinode-572652-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-572652-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.51s)

                                                
                                    
x
+
TestScheduledStopUnix (118.72s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-027447 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-027447 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.961354071s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-027447 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-027447 -n scheduled-stop-027447
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-027447 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-027447 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-027447 -n scheduled-stop-027447
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-027447
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-027447 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0130 20:21:31.182338   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-027447
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-027447: exit status 7 (84.444274ms)

                                                
                                                
-- stdout --
	scheduled-stop-027447
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-027447 -n scheduled-stop-027447
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-027447 -n scheduled-stop-027447: exit status 7 (74.636111ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-027447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-027447
--- PASS: TestScheduledStopUnix (118.72s)
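Note: this test exercises the scheduled-stop flags shown above. A minimal sketch of the same flow, with an illustrative profile name:

	minikube stop -p demo --schedule 5m                      # queue a stop five minutes out
	minikube status -p demo --format={{.TimeToStop}}         # inspect the pending schedule
	minikube stop -p demo --cancel-scheduled                 # cancel before it fires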

                                                
                                    
x
+
TestRunningBinaryUpgrade (158.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4165048219 start -p running-upgrade-215278 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4165048219 start -p running-upgrade-215278 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m2.127985764s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-215278 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-215278 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m33.091327787s)
helpers_test.go:175: Cleaning up "running-upgrade-215278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-215278
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-215278: (1.056674934s)
--- PASS: TestRunningBinaryUpgrade (158.74s)

                                                
                                    
x
+
TestKubernetesUpgrade (215.06s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-876229 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-876229 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.172615906s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-876229
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-876229: (2.13110799s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-876229 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-876229 status --format={{.Host}}: exit status 7 (86.725417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-876229 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0130 20:23:07.771150   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-876229 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.917231531s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-876229 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-876229 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-876229 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (108.505368ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-876229] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-876229
	    minikube start -p kubernetes-upgrade-876229 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8762292 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-876229 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-876229 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-876229 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.550908961s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-876229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-876229
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-876229: (1.036621886s)
--- PASS: TestKubernetesUpgrade (215.06s)
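Note: exit status 106 above is the intended downgrade guard. In-place upgrades are supported (start, stop, start again with a newer --kubernetes-version), while a downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED and the suggestion to delete and recreate. A sketch mirroring the commands in this run (profile name illustrative):

	minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
	minikube stop -p upgrade-demo
	minikube start -p upgrade-demo --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio
	# going back down requires: minikube delete -p upgrade-demo && minikube start -p upgrade-demo --kubernetes-version=v1.16.0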

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-997045 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-997045 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (994.191902ms)

                                                
                                                
-- stdout --
	* [false-997045] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 20:21:37.043512   35812 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:21:37.043770   35812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:21:37.043781   35812 out.go:309] Setting ErrFile to fd 2...
	I0130 20:21:37.043788   35812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:21:37.044004   35812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4458/.minikube/bin
	I0130 20:21:37.044595   35812 out.go:303] Setting JSON to false
	I0130 20:21:37.045428   35812 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3842,"bootTime":1706642255,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 20:21:37.045488   35812 start.go:138] virtualization: kvm guest
	I0130 20:21:37.047574   35812 out.go:177] * [false-997045] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 20:21:37.048991   35812 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 20:21:37.049039   35812 notify.go:220] Checking for updates...
	I0130 20:21:37.050283   35812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 20:21:37.051578   35812 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	I0130 20:21:37.052846   35812 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	I0130 20:21:37.054388   35812 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 20:21:37.055751   35812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 20:21:37.057816   35812 config.go:182] Loaded profile config "kubernetes-upgrade-876229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 20:21:37.057967   35812 config.go:182] Loaded profile config "offline-crio-869267": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 20:21:37.058059   35812 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 20:21:37.975831   35812 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 20:21:37.977336   35812 start.go:298] selected driver: kvm2
	I0130 20:21:37.977348   35812 start.go:902] validating driver "kvm2" against <nil>
	I0130 20:21:37.977363   35812 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 20:21:37.979298   35812 out.go:177] 
	W0130 20:21:37.980616   35812 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0130 20:21:37.981999   35812 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-997045 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-997045" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-997045" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-997045

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997045"

                                                
                                                
----------------------- debugLogs end: false-997045 [took: 3.113899437s] --------------------------------
helpers_test.go:175: Cleaning up "false-997045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-997045
--- PASS: TestNetworkPlugins/group/false (4.25s)
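Note: the MK_USAGE exit above is the point of this subtest; `--cni=false` is rejected because the crio runtime requires a CNI. A working invocation would omit the flag (letting minikube pick a CNI) or name one explicitly; the value below is a typical choice shown as an assumption, not taken from this run:

	minikube start -p cni-demo --cni=bridge --driver=kvm2 --container-runtime=crio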

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.48s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (185.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1753121720 start -p stopped-upgrade-549138 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1753121720 start -p stopped-upgrade-549138 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m51.005455414s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1753121720 -p stopped-upgrade-549138 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1753121720 -p stopped-upgrade-549138 stop: (2.134968072s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-549138 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-549138 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.869933962s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (185.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-924610 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-924610 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (74.984551ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-924610] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
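Note: the MK_USAGE exit above is expected; `--no-kubernetes` cannot be combined with `--kubernetes-version`. The working form, used by the StartWithStopK8s subtest that follows, simply drops the version flag:

	minikube start -p NoKubernetes-924610 --no-kubernetes --driver=kvm2 --container-runtime=crio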

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (78.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-924610 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-924610 --driver=kvm2  --container-runtime=crio: (1m18.01759727s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-924610 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (78.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (10.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-924610 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0130 20:26:31.182264   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-924610 --no-kubernetes --driver=kvm2  --container-runtime=crio: (9.532437723s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-924610 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-924610 status -o json: exit status 2 (255.31287ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-924610","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-924610
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-924610: (1.172334485s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (29.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-924610 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-924610 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.110551546s)
--- PASS: TestNoKubernetes/serial/Start (29.11s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-549138
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                    
x
+
TestPause/serial/Start (108.53s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-922110 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-922110 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m48.529561179s)
--- PASS: TestPause/serial/Start (108.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-924610 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-924610 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.358217ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
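Note: exit status 1 is the desired outcome here; `systemctl is-active --quiet` exits non-zero when the kubelet unit is not running, so the ssh probe confirms Kubernetes is absent. A sketch of the same check run by hand (the trailing echo is an illustrative addition):

	minikube ssh -p NoKubernetes-924610 "sudo systemctl is-active --quiet service kubelet"; echo $?   # expect a non-zero code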

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (29.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.266372987s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.882075128s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-924610
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-924610: (2.450404849s)
--- PASS: TestNoKubernetes/serial/Stop (2.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (29.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-924610 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-924610 --driver=kvm2  --container-runtime=crio: (29.214655853s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (29.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (344.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-150971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-150971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (5m44.803879333s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (344.80s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (163.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-473743 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0130 20:28:07.771596   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-473743 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m43.068874977s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (163.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-924610 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-924610 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.770948ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (161.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-208583 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0130 20:28:22.759299   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 20:28:39.710854   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-208583 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m41.585490907s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (161.59s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (80.43s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-922110 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-922110 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m20.400437346s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (80.43s)

                                                
                                    
x
+
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-922110 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-922110 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-922110 --output=json --layout=cluster: exit status 2 (278.663281ms)

                                                
                                                
-- stdout --
	{"Name":"pause-922110","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-922110","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-922110 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.94s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-922110 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.05s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-922110 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-922110 --alsologtostderr -v=5: (1.054053946s)
--- PASS: TestPause/serial/DeletePaused (1.05s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.75s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.75s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-877742 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-877742 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m41.031724528s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (14.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-473743 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [76483155-3957-4487-a0a8-7c5511ea5fe4] Pending
helpers_test.go:344: "busybox" [76483155-3957-4487-a0a8-7c5511ea5fe4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [76483155-3957-4487-a0a8-7c5511ea5fe4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 14.00520756s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-473743 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (14.37s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-208583 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [689c9651-345a-43fd-aa34-90f6d5e6af09] Pending
helpers_test.go:344: "busybox" [689c9651-345a-43fd-aa34-90f6d5e6af09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [689c9651-345a-43fd-aa34-90f6d5e6af09] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00413106s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-208583 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-473743 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-473743 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.096251171s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-473743 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-208583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-208583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.124489386s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-208583 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-877742 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8ba694f4-f618-4da4-99c5-2cc4268a3b18] Pending
helpers_test.go:344: "busybox" [8ba694f4-f618-4da4-99c5-2cc4268a3b18] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8ba694f4-f618-4da4-99c5-2cc4268a3b18] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004129919s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-877742 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-877742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-877742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.10467368s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-877742 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (670.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-473743 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-473743 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (11m9.804781321s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473743 -n no-preload-473743
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (670.11s)
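SecondStart boots the previously stopped profile again with the same flags as the first start and then checks the host state. The flags recorded above, reflowed for readability; the long wall time (just over 11m) is plausibly a consequence of --preload=false forcing images to be pulled rather than restored from a preload tarball, though the report itself does not say so:

out/minikube-linux-amd64 start -p no-preload-473743 \
  --memory=2200 --alsologtostderr --wait=true \
  --preload=false \
  --driver=kvm2 --container-runtime=crio \
  --kubernetes-version=v1.29.0-rc.2
out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473743 -n no-preload-473743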

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-150971 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6fca0760-769d-44f4-98a6-0c83dcd130b9] Pending
helpers_test.go:344: "busybox" [6fca0760-769d-44f4-98a6-0c83dcd130b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6fca0760-769d-44f4-98a6-0c83dcd130b9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003145049s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-150971 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (582.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-208583 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-208583 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m42.427931618s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208583 -n embed-certs-208583
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (582.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-150971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-150971 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (826.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-877742 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-877742 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (13m46.441532754s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-877742 -n default-k8s-diff-port-877742
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (826.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (614.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-150971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0130 20:36:14.228697   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 20:36:31.182521   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 20:38:07.771005   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
E0130 20:38:39.711306   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 20:41:31.181493   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 20:43:07.771494   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-150971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (10m14.086545344s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-150971 -n old-k8s-version-150971
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (614.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-564644 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-564644 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (59.964822683s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.96s)
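The newest-cni profile is started with an explicit CNI network plugin, a feature gate, and an extra kubeadm pod-network CIDR, which is why later steps in this group warn that pods cannot schedule until a CNI is actually installed. The same start command, broken onto separate lines:

out/minikube-linux-amd64 start -p newest-cni-564644 \
  --memory=2200 --alsologtostderr \
  --wait=apiserver,system_pods,default_sa \
  --feature-gates ServerSideApply=true \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=kvm2 --container-runtime=crio \
  --kubernetes-version=v1.29.0-rc.2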

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (124.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0130 20:58:07.770930   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/functional-741304/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m4.074428395s)
--- PASS: TestNetworkPlugins/group/auto/Start (124.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (103.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0130 20:58:27.330978   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:27.336317   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:27.346654   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:27.367218   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:27.407392   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:27.487722   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:27.648236   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:27.969022   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:28.610165   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:29.890626   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:32.451817   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:37.572078   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 20:58:39.710986   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
E0130 20:58:47.812602   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m43.85380389s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (103.85s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-564644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-564644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.41274393s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-564644 --alsologtostderr -v=3
E0130 20:59:08.293631   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-564644 --alsologtostderr -v=3: (11.133578943s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-564644 -n newest-cni-564644
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-564644 -n newest-cni-564644: exit status 7 (84.259459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-564644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
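EnableAddonAfterStop verifies that an addon can be enabled while the cluster is down: status exits with code 7 and reports the host as Stopped (which the test explicitly treats as acceptable), and the dashboard addon is then enabled with a scraper image override. As plain commands; the trailing || true only keeps an interactive shell from treating the expected non-zero exit as an error and is not part of the test:

# exits 7 while the profile is stopped ("may be ok" per the test)
out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-564644 -n newest-cni-564644 || true
out/minikube-linux-amd64 addons enable dashboard -p newest-cni-564644 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4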

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (57.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-564644 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0130 20:59:49.254640   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-564644 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (56.796650309s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-564644 -n newest-cni-564644
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (57.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lf2vl" [af5c4a07-fc0f-41ad-a4ce-849faf43fb6e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005163182s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-997045 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (15.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-997045 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-92spc" [cb75343e-08e1-4469-8828-010a6b3df0d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-92spc" [cb75343e-08e1-4469-8828-010a6b3df0d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.006090223s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.30s)
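NetCatPod installs the probe workload used by the later DNS, Localhost and HairPin checks: the manifest in testdata/netcat-deployment.yaml is force-replaced and the test waits up to 15m for a pod labelled app=netcat to become healthy. A manual equivalent, with kubectl wait standing in for the test's own polling helper:

kubectl --context kindnet-997045 replace --force -f testdata/netcat-deployment.yaml
kubectl --context kindnet-997045 wait --for=condition=Ready pod -l app=netcat --timeout=15m0s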

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-997045 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-997045 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rt82q" [96e45774-0a6a-400f-afa7-8eaa531e570f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rt82q" [96e45774-0a6a-400f-afa7-8eaa531e570f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004232433s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-564644 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-564644 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-564644 -n newest-cni-564644
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-564644 -n newest-cni-564644: exit status 2 (287.35185ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-564644 -n newest-cni-564644
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-564644 -n newest-cni-564644: exit status 2 (334.94ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-564644 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-564644 --alsologtostderr -v=1: (1.077738582s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-564644 -n newest-cni-564644
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-564644 -n newest-cni-564644
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.25s)
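The Pause step pauses the profile, confirms through status that the apiserver reports Paused and the kubelet reports Stopped (both returned as exit status 2, which the test accepts), then unpauses and re-checks. As commands; || true again only absorbs the expected non-zero exits and is not part of the test:

out/minikube-linux-amd64 pause -p newest-cni-564644 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-564644 -n newest-cni-564644 || true   # "Paused", exit 2
out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-564644 -n newest-cni-564644 || true     # "Stopped", exit 2
out/minikube-linux-amd64 unpause -p newest-cni-564644 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-564644 -n newest-cni-564644
out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-564644 -n newest-cni-564644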

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (98.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m38.583922395s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (114.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m54.544101861s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (114.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-997045 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
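Each network plugin gets the same three probes from inside the netcat deployment: in-cluster DNS resolution, a loopback connection, and a hairpin connection back through the pod's own Service name (presumably the "netcat" Service created by testdata/netcat-deployment.yaml). The underlying commands for the auto profile; the other plugin groups differ only in the --context value:

# DNS: resolve the kubernetes.default service from inside the pod
kubectl --context auto-997045 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: connect to port 8080 on the pod's own loopback
kubectl --context auto-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: connect back to the pod via the "netcat" service name
kubectl --context auto-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"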

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-997045 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (137.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m17.939238396s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (137.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (152.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0130 21:00:39.213045   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:39.218305   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:39.229157   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:39.249768   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:39.290211   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:39.371171   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:39.531535   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:39.852706   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:40.492931   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:41.773480   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:44.334487   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:49.454770   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:00:59.695200   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:01:11.175394   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
E0130 21:01:20.175464   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:01:31.181461   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/ingress-addon-legacy-223875/client.crt: no such file or directory
E0130 21:01:42.761146   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/addons-663262/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m32.7834149s)
--- PASS: TestNetworkPlugins/group/flannel/Start (152.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5v9tk" [bf4e468c-7071-446a-8779-4a7e14cda5b2] Running
E0130 21:01:51.865388   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
E0130 21:01:51.870712   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
E0130 21:01:51.880974   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
E0130 21:01:51.901266   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
E0130 21:01:51.941631   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
E0130 21:01:52.022005   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
E0130 21:01:52.182510   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
E0130 21:01:52.503050   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
E0130 21:01:53.144083   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
E0130 21:01:54.424252   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006771917s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-997045 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-997045 replace --force -f testdata/netcat-deployment.yaml
E0130 21:01:56.984918   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hkj5v" [5da0ce10-2f94-4aaf-876b-f495caa3b02a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0130 21:02:01.136627   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
E0130 21:02:02.105148   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-hkj5v" [5da0ce10-2f94-4aaf-876b-f495caa3b02a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005472084s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-997045 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-997045 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-997045 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2ht74" [29656cdb-9d40-4802-928b-cf463924e817] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0130 21:02:12.345640   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2ht74" [29656cdb-9d40-4802-928b-cf463924e817] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004620121s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-997045 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (104.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0130 21:02:32.825891   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-997045 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m44.5681969s)
--- PASS: TestNetworkPlugins/group/bridge/Start (104.57s)
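Across the network-plugin matrix every profile is created the same way: 3072MB of memory, a 15m wait timeout, the kvm2 driver and the crio runtime. Only the CNI selection differs: --cni=kindnet, calico, flannel or bridge, --cni=testdata/kube-flannel.yaml for the custom-flannel case, --enable-default-cni=true for the enable-default-cni case, and no CNI flag at all for the auto run. The bridge invocation above, reflowed as an example of the shared shape:

out/minikube-linux-amd64 start -p bridge-997045 \
  --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
  --cni=bridge \
  --driver=kvm2 --container-runtime=crio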

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-997045 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-997045 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rb29c" [4b237115-df47-4d8e-abc6-daea43346aa4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rb29c" [4b237115-df47-4d8e-abc6-daea43346aa4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.239373702s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-997045 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f6rpd" [c809e6d6-b6e6-48db-a93a-ee4d58da5539] Running
E0130 21:03:13.786765   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/default-k8s-diff-port-877742/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004696538s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-997045 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-997045 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fw5gs" [08600d1c-28ea-4ee1-a22a-b7b371c7dde4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fw5gs" [08600d1c-28ea-4ee1-a22a-b7b371c7dde4] Running
E0130 21:03:23.057494   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/no-preload-473743/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004006634s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-997045 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0130 21:03:27.331087   11667 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4458/.minikube/profiles/old-k8s-version-150971/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-997045 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-997045 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x2ttv" [c9d05ba0-9869-4104-9ab3-e94693d1b1f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x2ttv" [c9d05ba0-9869-4104-9ab3-e94693d1b1f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003956116s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-997045 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
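
For reference, the DNS, Localhost and HairPin probes above are the same three checks every network-plugin group runs against its netcat deployment. A minimal shell sketch for repeating them by hand against the bridge profile from this run (assuming the bridge-997045 cluster is still up and the deployment from testdata/netcat-deployment.yaml is still applied):

    kubectl --context bridge-997045 exec deployment/netcat -- nslookup kubernetes.default                     # DNS: resolve a cluster service name
    kubectl --context bridge-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"     # Localhost: pod reaches its own port
    kubectl --context bridge-997045 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"        # HairPin: pod reaches itself through its own service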

                                                
                                    

Test skip (39/310)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
245 TestNetworkPlugins/group/kubenet 3.21
251 TestStartStop/group/disable-driver-mounts 0.17
260 TestNetworkPlugins/group/cilium 3.56
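
Each entry above maps directly to a Go test name, so any of them can be re-run individually. A sketch only, assuming the standard minikube test/integration layout for these files (net_test.go, addons_test.go, etc.); driver and container-runtime selection go through additional harness flags that are not shown here:

    # re-run one skipped test by name with the Go test runner
    go test ./test/integration -v -timeout 60m -run 'TestNetworkPlugins/group/kubenet'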
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-997045 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-997045" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-997045" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-997045

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997045"

                                                
                                                
----------------------- debugLogs end: kubenet-997045 [took: 3.064620396s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-997045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-997045
--- SKIP: TestNetworkPlugins/group/kubenet (3.21s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-757744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-757744
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-997045 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-997045

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-997045" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-997045" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-997045" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: kubelet daemon config:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> k8s: kubelet logs:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-997045

>>> host: docker daemon status:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: docker daemon config:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: docker system info:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: cri-docker daemon status:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: cri-docker daemon config:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: cri-dockerd version:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: containerd daemon status:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: containerd daemon config:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: containerd config dump:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: crio daemon status:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: crio daemon config:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: /etc/crio:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

>>> host: crio config:
* Profile "cilium-997045" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997045"

----------------------- debugLogs end: cilium-997045 [took: 3.409925242s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-997045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-997045
--- SKIP: TestNetworkPlugins/group/cilium (3.56s)